Jan 22 14:14:02 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 22 14:14:02 crc kubenswrapper[5099]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 22 14:14:02 crc kubenswrapper[5099]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 22 14:14:02 crc kubenswrapper[5099]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 22 14:14:02 crc kubenswrapper[5099]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 22 14:14:02 crc kubenswrapper[5099]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 22 14:14:02 crc kubenswrapper[5099]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.568857 5099 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574533 5099 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574565 5099 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574574 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574583 5099 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574592 5099 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574599 5099 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574607 5099 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574615 5099 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574622 5099 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574643 5099 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574651 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574658 5099 feature_gate.go:328] unrecognized feature gate: Example
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574666 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574673 5099 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574680 5099 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574689 5099 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574697 5099 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574705 5099 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574713 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574720 5099 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574728 5099 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574735 5099 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574743 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574751 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574758 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574766 5099 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574773 5099 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574780 5099 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574787 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574793 5099 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574804 5099 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
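The "unrecognized feature gate" warnings above appear to be OpenShift-level feature gates passed down to the kubelet, which only recognizes upstream Kubernetes gates; they are warnings, not errors, and the same block is printed again further down (at 14:14:02.576xxx, .578xxx and .589xxx) each time the gate list is parsed. Deduplicating them makes triage much easier. A minimal sketch, assuming the journal output has been saved to a file named kubelet.log (an assumed path, not something the log mentions):

```python
#!/usr/bin/env python3
"""Summarize the kubelet's 'unrecognized feature gate' warnings.

Triage helper only; the input path is an assumption. Save the pasted
journal output to that file before running.
"""
import re
from collections import Counter
from pathlib import Path

LOG_PATH = Path("kubelet.log")  # assumed path for the pasted journal output

text = LOG_PATH.read_text()

# Each warning record ends with: feature_gate.go:328] unrecognized feature gate: <Name>
names = re.findall(r"unrecognized feature gate: (\S+)", text)
counts = Counter(names)

print(f"{len(counts)} distinct gates, {len(names)} warnings total")
for name, n in counts.most_common():
    print(f"{n:3d}  {name}")
```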
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574814 5099 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574821 5099 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574838 5099 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574845 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574852 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574860 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574867 5099 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574874 5099 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574881 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574888 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574899 5099 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574909 5099 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574917 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574924 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574931 5099 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574938 5099 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574945 5099 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574952 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574960 5099 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574968 5099 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574975 5099 feature_gate.go:328] unrecognized feature gate: Example2 Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574983 5099 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574992 5099 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.574999 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575006 5099 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 22 14:14:02 crc kubenswrapper[5099]: 
W0122 14:14:02.575013 5099 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575020 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575027 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575035 5099 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575041 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575048 5099 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575056 5099 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575063 5099 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575070 5099 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575076 5099 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575085 5099 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575092 5099 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575100 5099 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575107 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575114 5099 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575121 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575129 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575280 5099 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575336 5099 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575343 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575349 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575354 5099 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575360 5099 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575365 5099 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575385 5099 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575392 5099 feature_gate.go:328] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575397 5099 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575401 5099 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575408 5099 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.575412 5099 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576247 5099 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576258 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576263 5099 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576268 5099 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576273 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576277 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576282 5099 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576286 5099 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576291 5099 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576295 5099 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576299 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576304 5099 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576309 5099 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576313 5099 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576317 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576322 5099 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576326 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576331 5099 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576335 5099 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576341 5099 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576347 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 22 14:14:02 crc 
kubenswrapper[5099]: W0122 14:14:02.576353 5099 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576358 5099 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576364 5099 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576371 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576377 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576383 5099 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576389 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576394 5099 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576400 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576406 5099 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576412 5099 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576418 5099 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576424 5099 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576430 5099 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576435 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576444 5099 feature_gate.go:328] unrecognized feature gate: Example Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576450 5099 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576455 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576459 5099 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576464 5099 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576468 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576474 5099 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576480 5099 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576484 5099 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576489 5099 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576493 5099 feature_gate.go:328] 
unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576497 5099 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576502 5099 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576507 5099 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576511 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576516 5099 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576521 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576526 5099 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576531 5099 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576535 5099 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576541 5099 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576547 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576552 5099 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576557 5099 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576561 5099 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576565 5099 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576570 5099 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576574 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576579 5099 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576583 5099 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576587 5099 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576593 5099 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576597 5099 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576602 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576608 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576612 5099 
feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576617 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576621 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576627 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576631 5099 feature_gate.go:328] unrecognized feature gate: Example2 Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576637 5099 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576645 5099 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576650 5099 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576655 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576660 5099 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576665 5099 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576670 5099 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576674 5099 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576679 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.576690 5099 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577016 5099 flags.go:64] FLAG: --address="0.0.0.0" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577032 5099 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577050 5099 flags.go:64] FLAG: --anonymous-auth="true" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577057 5099 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577065 5099 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577070 5099 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577077 5099 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577085 5099 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577091 5099 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577096 5099 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577102 5099 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577108 5099 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 22 
14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577113 5099 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577144 5099 flags.go:64] FLAG: --cgroup-root="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577149 5099 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577155 5099 flags.go:64] FLAG: --client-ca-file="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577176 5099 flags.go:64] FLAG: --cloud-config="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577182 5099 flags.go:64] FLAG: --cloud-provider="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577186 5099 flags.go:64] FLAG: --cluster-dns="[]" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577197 5099 flags.go:64] FLAG: --cluster-domain="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577204 5099 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577209 5099 flags.go:64] FLAG: --config-dir="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577214 5099 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577220 5099 flags.go:64] FLAG: --container-log-max-files="5" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577226 5099 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577232 5099 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577237 5099 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577242 5099 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577248 5099 flags.go:64] FLAG: --contention-profiling="false" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577253 5099 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577258 5099 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577263 5099 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577276 5099 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577283 5099 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577288 5099 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577293 5099 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577298 5099 flags.go:64] FLAG: --enable-load-reader="false" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577303 5099 flags.go:64] FLAG: --enable-server="true" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577308 5099 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577316 5099 flags.go:64] FLAG: --event-burst="100" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577321 5099 flags.go:64] FLAG: --event-qps="50" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577326 5099 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 
14:14:02.577331 5099 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577336 5099 flags.go:64] FLAG: --eviction-hard="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577343 5099 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577348 5099 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577353 5099 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577358 5099 flags.go:64] FLAG: --eviction-soft="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577364 5099 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577371 5099 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577376 5099 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577382 5099 flags.go:64] FLAG: --experimental-mounter-path="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577388 5099 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577393 5099 flags.go:64] FLAG: --fail-swap-on="true" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577398 5099 flags.go:64] FLAG: --feature-gates="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577404 5099 flags.go:64] FLAG: --file-check-frequency="20s" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577410 5099 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577415 5099 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577420 5099 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577425 5099 flags.go:64] FLAG: --healthz-port="10248" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577430 5099 flags.go:64] FLAG: --help="false" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577436 5099 flags.go:64] FLAG: --hostname-override="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577441 5099 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577446 5099 flags.go:64] FLAG: --http-check-frequency="20s" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577451 5099 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577456 5099 flags.go:64] FLAG: --image-credential-provider-config="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577461 5099 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577467 5099 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577471 5099 flags.go:64] FLAG: --image-service-endpoint="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577476 5099 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577481 5099 flags.go:64] FLAG: --kube-api-burst="100" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577486 5099 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 22 14:14:02 crc kubenswrapper[5099]: 
I0122 14:14:02.577492 5099 flags.go:64] FLAG: --kube-api-qps="50" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577498 5099 flags.go:64] FLAG: --kube-reserved="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577503 5099 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577508 5099 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577514 5099 flags.go:64] FLAG: --kubelet-cgroups="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577519 5099 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577524 5099 flags.go:64] FLAG: --lock-file="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577529 5099 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577534 5099 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577539 5099 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577548 5099 flags.go:64] FLAG: --log-json-split-stream="false" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577553 5099 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577559 5099 flags.go:64] FLAG: --log-text-split-stream="false" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577564 5099 flags.go:64] FLAG: --logging-format="text" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577569 5099 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577575 5099 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577580 5099 flags.go:64] FLAG: --manifest-url="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577585 5099 flags.go:64] FLAG: --manifest-url-header="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577592 5099 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577597 5099 flags.go:64] FLAG: --max-open-files="1000000" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577604 5099 flags.go:64] FLAG: --max-pods="110" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577609 5099 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577614 5099 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577619 5099 flags.go:64] FLAG: --memory-manager-policy="None" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577624 5099 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577629 5099 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577635 5099 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577640 5099 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577655 5099 flags.go:64] FLAG: --node-status-max-images="50" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577660 5099 flags.go:64] FLAG: 
--node-status-update-frequency="10s" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577666 5099 flags.go:64] FLAG: --oom-score-adj="-999" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577671 5099 flags.go:64] FLAG: --pod-cidr="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577676 5099 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577684 5099 flags.go:64] FLAG: --pod-manifest-path="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577689 5099 flags.go:64] FLAG: --pod-max-pids="-1" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577694 5099 flags.go:64] FLAG: --pods-per-core="0" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577699 5099 flags.go:64] FLAG: --port="10250" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577704 5099 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577709 5099 flags.go:64] FLAG: --provider-id="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577714 5099 flags.go:64] FLAG: --qos-reserved="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577719 5099 flags.go:64] FLAG: --read-only-port="10255" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577724 5099 flags.go:64] FLAG: --register-node="true" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577730 5099 flags.go:64] FLAG: --register-schedulable="true" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577735 5099 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577745 5099 flags.go:64] FLAG: --registry-burst="10" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577750 5099 flags.go:64] FLAG: --registry-qps="5" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577755 5099 flags.go:64] FLAG: --reserved-cpus="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577760 5099 flags.go:64] FLAG: --reserved-memory="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577766 5099 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577771 5099 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577776 5099 flags.go:64] FLAG: --rotate-certificates="false" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577781 5099 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577786 5099 flags.go:64] FLAG: --runonce="false" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577790 5099 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577796 5099 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577801 5099 flags.go:64] FLAG: --seccomp-default="false" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577806 5099 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577811 5099 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577816 5099 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577823 5099 
flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577829 5099 flags.go:64] FLAG: --storage-driver-password="root" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577834 5099 flags.go:64] FLAG: --storage-driver-secure="false" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577839 5099 flags.go:64] FLAG: --storage-driver-table="stats" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577844 5099 flags.go:64] FLAG: --storage-driver-user="root" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577849 5099 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577854 5099 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577859 5099 flags.go:64] FLAG: --system-cgroups="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577864 5099 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577873 5099 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577878 5099 flags.go:64] FLAG: --tls-cert-file="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577883 5099 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577889 5099 flags.go:64] FLAG: --tls-min-version="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577894 5099 flags.go:64] FLAG: --tls-private-key-file="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577899 5099 flags.go:64] FLAG: --topology-manager-policy="none" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577904 5099 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577909 5099 flags.go:64] FLAG: --topology-manager-scope="container" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577914 5099 flags.go:64] FLAG: --v="2" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577921 5099 flags.go:64] FLAG: --version="false" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577929 5099 flags.go:64] FLAG: --vmodule="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577937 5099 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.577942 5099 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578089 5099 feature_gate.go:328] unrecognized feature gate: Example2 Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578096 5099 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578102 5099 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578107 5099 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578112 5099 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578117 5099 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578122 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578127 5099 feature_gate.go:328] 
unrecognized feature gate: CPMSMachineNamePrefix Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578133 5099 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578138 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578143 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578148 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578153 5099 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578158 5099 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578179 5099 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578184 5099 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578188 5099 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578193 5099 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578198 5099 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578202 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578206 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578211 5099 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578215 5099 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578220 5099 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578225 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578229 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578233 5099 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578238 5099 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578247 5099 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578252 5099 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578256 5099 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578261 5099 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578265 5099 feature_gate.go:328] unrecognized 
feature gate: RouteAdvertisements Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578272 5099 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578277 5099 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578282 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578287 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578291 5099 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578297 5099 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578301 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578309 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578313 5099 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578318 5099 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578322 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578327 5099 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578332 5099 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578336 5099 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578341 5099 feature_gate.go:328] unrecognized feature gate: Example Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578345 5099 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578350 5099 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578354 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578358 5099 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578363 5099 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578368 5099 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578375 5099 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
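The "flags.go:64] FLAG:" records above dump every command-line flag with its effective value, which is the quickest way to see how this kubelet was launched: for example --container-runtime-endpoint="/var/run/crio/crio.sock", --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi", and --config="/etc/kubernetes/kubelet.conf", the file the deprecation warnings want those settings moved into. Below is a small sketch that pulls the dump into a dict and lists the deprecated flags next to the KubeletConfiguration field names from the documentation the warnings link to; the kubelet.log path and the flag-to-field mapping are illustrative assumptions, not values read from this log.

```python
#!/usr/bin/env python3
"""Collect the effective kubelet flags from the 'flags.go:64] FLAG:' records."""
import re
from pathlib import Path

LOG_PATH = Path("kubelet.log")  # assumed path for the pasted journal output

text = LOG_PATH.read_text()

# Records look like:  flags.go:64] FLAG: --max-pods="110"
flags = dict(re.findall(r'FLAG: --([\w.-]+)="([^"]*)"', text))
print(f"{len(flags)} flags captured; config file is {flags.get('config')!r}")

# Flags the deprecation warnings say belong in the --config file, mapped to
# the KubeletConfiguration field names documented at the URL in the warnings
# (mapping assumed for illustration; verify against those docs).
move_to_config = {
    "container-runtime-endpoint": "containerRuntimeEndpoint",
    "volume-plugin-dir": "volumePluginDir",
    "register-with-taints": "registerWithTaints",
    "system-reserved": "systemReserved",
}
for flag, field in move_to_config.items():
    print(f"--{flag}={flags.get(flag)!r}  ->  {field}")
```

The printed pairs make it easy to check that whatever eventually lands in the config file matches the values this node is running with today.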
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578381 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578386 5099 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578390 5099 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578395 5099 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578400 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578407 5099 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578411 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578416 5099 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578421 5099 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578425 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578430 5099 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578434 5099 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578438 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578444 5099 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578449 5099 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578454 5099 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578459 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578467 5099 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578473 5099 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578478 5099 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578484 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578489 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578496 5099 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578502 5099 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578507 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 
14:14:02.578513 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578518 5099 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578523 5099 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578529 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578534 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.578540 5099 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.578782 5099 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.588616 5099 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.589010 5099 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589102 5099 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589153 5099 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589190 5099 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589200 5099 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589207 5099 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589215 5099 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589223 5099 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589230 5099 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589237 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589244 5099 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589252 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589259 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589266 5099 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589273 5099 feature_gate.go:328] unrecognized feature gate: 
NetworkSegmentation Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589280 5099 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589288 5099 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589295 5099 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589301 5099 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589309 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589315 5099 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589323 5099 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589331 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589338 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589345 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589352 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589360 5099 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589367 5099 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589375 5099 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589382 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589391 5099 feature_gate.go:328] unrecognized feature gate: Example2 Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589399 5099 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589408 5099 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589416 5099 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589426 5099 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589436 5099 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589443 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589451 5099 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589458 5099 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589465 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589472 5099 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589479 5099 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589486 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589494 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589501 5099 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589508 5099 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589515 5099 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589522 5099 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589529 5099 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589537 5099 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589544 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589551 5099 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589558 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589565 5099 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589573 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589580 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589587 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589596 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589604 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589611 5099 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589618 5099 feature_gate.go:328] unrecognized 
feature gate: ClusterAPIInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589625 5099 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589632 5099 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589640 5099 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589647 5099 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589655 5099 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589663 5099 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589670 5099 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589680 5099 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589689 5099 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589697 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589704 5099 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589711 5099 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589718 5099 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589726 5099 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589733 5099 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589740 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589747 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589754 5099 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589762 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589946 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589960 5099 feature_gate.go:328] unrecognized feature gate: Example Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589969 5099 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589977 5099 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.589985 5099 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.590460 5099 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 22 14:14:02 crc 
kubenswrapper[5099]: W0122 14:14:02.590472 5099 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.590487 5099 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.590918 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.590931 5099 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.590947 5099 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.590955 5099 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.590964 5099 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.590971 5099 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.590979 5099 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.590987 5099 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.590995 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591002 5099 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591012 5099 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591020 5099 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591029 5099 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591036 5099 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591043 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591057 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591065 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591072 5099 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591080 5099 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591087 5099 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591095 5099 feature_gate.go:328] unrecognized 
feature gate: ShortCertRotation Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591102 5099 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591110 5099 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591117 5099 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591125 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591132 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591140 5099 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591148 5099 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591189 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591198 5099 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591210 5099 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591220 5099 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591229 5099 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591237 5099 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591244 5099 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591252 5099 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591260 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591267 5099 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591275 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591282 5099 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591296 5099 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591303 5099 feature_gate.go:328] unrecognized feature gate: Example2 Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591311 5099 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591321 5099 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591328 5099 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591336 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 
14:14:02.591343 5099 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591351 5099 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591360 5099 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591367 5099 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591375 5099 feature_gate.go:328] unrecognized feature gate: Example Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591383 5099 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591391 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591403 5099 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591410 5099 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591418 5099 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591426 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591433 5099 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591442 5099 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591450 5099 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591458 5099 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591466 5099 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591473 5099 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591481 5099 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591489 5099 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591501 5099 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591508 5099 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591515 5099 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591522 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591533 5099 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591598 5099 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591700 5099 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591710 5099 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591717 5099 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591721 5099 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.591725 5099 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.592016 5099 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.592029 5099 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.592033 5099 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.592039 5099 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.592043 5099 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.592048 5099 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.592054 5099 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.592058 5099 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.592061 5099 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 22 14:14:02 crc kubenswrapper[5099]: W0122 14:14:02.592065 5099 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.592075 5099 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.592341 5099 server.go:962] "Client rotation is on, will bootstrap in background" Jan 22 14:14:02 crc kubenswrapper[5099]: E0122 14:14:02.594567 5099 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.597731 5099 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.597836 5099 certificate_store.go:147] "Loading 
cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.598302 5099 server.go:1019] "Starting client certificate rotation" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.598491 5099 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.598569 5099 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.611710 5099 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 14:14:02 crc kubenswrapper[5099]: E0122 14:14:02.613416 5099 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.613438 5099 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.623306 5099 log.go:25] "Validated CRI v1 runtime API" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.646226 5099 log.go:25] "Validated CRI v1 image API" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.647684 5099 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.652847 5099 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-01-22-14-07-46-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.652879 5099 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.669438 5099 manager.go:217] Machine: {Timestamp:2026-01-22 14:14:02.666996082 +0000 UTC m=+0.374746339 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33649922048 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:80005360-1d39-4f7f-b08e-11268098b583 BootID:ae36316e-5d25-4478-9e3d-172dd4f263b5 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824963072 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 
DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107656 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824958976 Type:vfs Inodes:4107656 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:63:25:24 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:63:25:24 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:85:d4:34 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:78:d8:99 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:6b:6f:b0 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f2:1d:6d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:26:35:67:5d:9a:dc Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:56:89:f1:52:ee:20 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649922048 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 
Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.669706 5099 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.669857 5099 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.672094 5099 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.672139 5099 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.672319 5099 topology_manager.go:138] "Creating topology manager with none policy" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.672328 5099 container_manager_linux.go:306] "Creating device plugin manager" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 
14:14:02.672349 5099 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.672368 5099 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.672741 5099 state_mem.go:36] "Initialized new in-memory state store" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.672895 5099 server.go:1267] "Using root directory" path="/var/lib/kubelet" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.673592 5099 kubelet.go:491] "Attempting to sync node with API server" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.673614 5099 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.673629 5099 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.673643 5099 kubelet.go:397] "Adding apiserver pod source" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.673661 5099 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.675308 5099 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.675325 5099 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.676216 5099 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.676227 5099 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.677951 5099 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.678225 5099 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.678587 5099 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.678938 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.678962 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.678969 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.678979 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.678985 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.678992 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 22 14:14:02 crc kubenswrapper[5099]: E0122 14:14:02.678977 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.679005 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 22 14:14:02 crc kubenswrapper[5099]: E0122 14:14:02.679037 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.679069 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.679135 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.679183 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.679210 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.679451 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.679932 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.679951 5099 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.681802 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.694701 5099 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.694791 5099 server.go:1295] "Started kubelet" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.694952 5099 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.695081 5099 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.695148 5099 server_v1.go:47] "podresources" method="list" useActivePods=true Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.695714 5099 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 22 14:14:02 crc systemd[1]: Started Kubernetes Kubelet. 
Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.700348 5099 server.go:317] "Adding debug handlers to kubelet server" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.700627 5099 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.701607 5099 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Jan 22 14:14:02 crc kubenswrapper[5099]: E0122 14:14:02.701644 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.701877 5099 volume_manager.go:295] "The desired_state_of_world populator starts" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.701934 5099 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 22 14:14:02 crc kubenswrapper[5099]: E0122 14:14:02.698603 5099 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d131cb8131cd1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.694745297 +0000 UTC m=+0.402495544,LastTimestamp:2026-01-22 14:14:02.694745297 +0000 UTC m=+0.402495544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:02 crc kubenswrapper[5099]: E0122 14:14:02.703794 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="200ms" Jan 22 14:14:02 crc kubenswrapper[5099]: E0122 14:14:02.705573 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.702241 5099 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.705925 5099 factory.go:55] Registering systemd factory Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.705976 5099 factory.go:223] Registration of the systemd container factory successfully Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.707911 5099 factory.go:153] Registering CRI-O factory Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.707942 5099 factory.go:223] Registration of the crio container factory successfully Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.708016 5099 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.708053 5099 factory.go:103] Registering Raw factory Jan 22 14:14:02 crc 
kubenswrapper[5099]: I0122 14:14:02.708071 5099 manager.go:1196] Started watching for new ooms in manager Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.710487 5099 manager.go:319] Starting recovery of all containers Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.737975 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.738088 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.738145 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.738250 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.738307 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.738374 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.738436 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.738489 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.738548 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.738604 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.738661 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.738715 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.738780 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.738849 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.738929 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.738990 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.739048 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.739106 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.739172 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.739228 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.739282 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.739349 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" 
volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.739445 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.739500 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.739551 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.739602 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.740470 5099 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.740579 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.740659 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.740739 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.740834 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.740913 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.740993 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.741396 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.741828 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.741923 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.741988 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.742041 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.742099 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.742225 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.742293 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.742373 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.742460 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.742539 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.742598 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.742651 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.742700 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.742758 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.742816 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.742867 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.742929 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.742992 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.743045 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.743096 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.743158 5099 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.743232 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.743287 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.743344 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.743401 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.743476 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.743538 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.743593 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.743645 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.743709 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.743823 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.743878 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.743949 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.744007 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.744061 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.744119 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.744493 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.744555 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.744631 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.744694 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.744759 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.744828 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.744919 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" 
volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.743886 5099 manager.go:324] Recovery completed Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.744995 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.745177 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.745256 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.745454 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.745523 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.745577 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.745629 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.745685 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.745741 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.745799 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.745851 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.745912 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.745971 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.746024 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.746086 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.746147 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.746286 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.746338 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.746392 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.746443 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.746504 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.746561 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" 
volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.746612 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.746663 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.746713 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.746769 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.746821 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.746871 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.746941 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.747010 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.747073 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.747128 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.747197 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" 
volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.747257 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.747310 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.747471 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.747565 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.748002 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.748068 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.748123 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.748235 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.748298 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.748355 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.748408 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" 
volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.748508 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.748564 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.748614 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.748669 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.748719 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.748778 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.748837 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.748899 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.748960 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.749012 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.749140 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" 
volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.749213 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.749277 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.749330 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.749382 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.749442 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.749529 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.749607 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.749675 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.749735 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.749795 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.749848 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" 
volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.750108 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.750185 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.750264 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.750382 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.750439 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.750593 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.750654 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.750708 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.750760 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.750825 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.750947 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" 
volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.751004 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.751083 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.751144 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.751229 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.751286 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.751340 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.751466 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.751526 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752103 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752151 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752197 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" 
volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752225 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752281 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752311 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752325 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752346 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752363 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752382 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752400 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752420 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752436 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752449 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" 
volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752468 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752483 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752499 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752513 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752533 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752548 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752560 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752577 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752711 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752738 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752752 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752772 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752785 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752799 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752818 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752832 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752854 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752875 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752893 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752913 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752926 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752944 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" 
volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752960 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752982 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.752996 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753012 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753033 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753077 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753104 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753116 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753136 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753151 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753203 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" 
volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753224 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753240 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753263 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753280 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753302 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753317 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753330 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753351 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753364 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753386 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753399 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" 
volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753414 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753433 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753447 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753465 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753481 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753573 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753594 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753609 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753621 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753639 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753653 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" 
volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753671 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753685 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753702 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753717 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753731 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753752 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753764 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753782 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753797 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753810 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753830 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" 
volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753843 5099 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753856 5099 reconstruct.go:97] "Volume reconstruction finished" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.753864 5099 reconciler.go:26] "Reconciler: start to sync state" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.757831 5099 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.759813 5099 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.759878 5099 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.759908 5099 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.759971 5099 kubelet.go:2451] "Starting kubelet main sync loop" Jan 22 14:14:02 crc kubenswrapper[5099]: E0122 14:14:02.760010 5099 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.763198 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:02 crc kubenswrapper[5099]: E0122 14:14:02.763544 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.764813 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.764854 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.764868 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.765650 5099 cpu_manager.go:222] "Starting CPU manager" policy="none" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.765665 5099 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.765682 5099 state_mem.go:36] "Initialized new in-memory state store" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.768728 5099 policy_none.go:49] "None policy: Start" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.768759 5099 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.768773 5099 state_mem.go:35] "Initializing new in-memory state store" Jan 22 14:14:02 crc kubenswrapper[5099]: E0122 14:14:02.802463 5099 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.809017 5099 manager.go:341] "Starting Device Plugin manager" Jan 22 14:14:02 crc kubenswrapper[5099]: E0122 14:14:02.809294 5099 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.809319 5099 server.go:85] "Starting device plugin registration server" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.809893 5099 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.809915 5099 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.810428 5099 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.810533 5099 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.810568 5099 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 22 14:14:02 crc kubenswrapper[5099]: E0122 14:14:02.815181 5099 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Jan 22 14:14:02 crc kubenswrapper[5099]: E0122 14:14:02.815332 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.861124 5099 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.861485 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.863264 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.863306 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.863316 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.863940 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.864123 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.864229 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.864580 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.864601 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.864611 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.864884 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.864983 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.865073 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.865128 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.865511 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.865552 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.865566 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.865575 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.865587 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.866106 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.866235 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.866254 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.866263 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.866385 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.866424 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.866844 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.866867 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.866875 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.867387 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.867466 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.867492 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.867806 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.867829 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.867838 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.867848 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.867865 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.867875 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.868518 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.868586 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.868612 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.868620 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.868720 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.869399 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.869418 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.869455 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:02 crc kubenswrapper[5099]: E0122 14:14:02.892418 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:02 crc kubenswrapper[5099]: E0122 14:14:02.904761 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="400ms" Jan 22 14:14:02 crc kubenswrapper[5099]: E0122 14:14:02.908049 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.910186 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.910927 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.910964 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.910978 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.911002 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:14:02 crc kubenswrapper[5099]: E0122 14:14:02.911511 5099 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Jan 22 14:14:02 crc kubenswrapper[5099]: E0122 14:14:02.931538 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.957041 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.957326 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.957536 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.957631 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.957716 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.957841 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.957942 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.958031 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.958127 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.958226 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.958313 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.957876 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.958423 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.958510 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.958533 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.958551 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.958569 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.958593 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.958601 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.958635 5099 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.958743 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.958830 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.958846 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.958863 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.958895 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.958909 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.958922 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.959132 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.959284 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: I0122 14:14:02.959402 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:02 crc kubenswrapper[5099]: E0122 14:14:02.961597 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:02 crc kubenswrapper[5099]: E0122 14:14:02.969674 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060255 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060311 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060332 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060385 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060411 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060432 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060457 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:03 crc 
kubenswrapper[5099]: I0122 14:14:03.060483 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060493 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060509 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060619 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060649 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060658 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060691 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060714 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060748 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060773 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 
14:14:03.060798 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060828 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060840 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060859 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060574 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060892 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060965 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060967 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.061000 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060723 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.061043 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.061070 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.061079 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.061082 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.060750 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.112326 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.113533 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.113565 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.113574 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.113593 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:14:03 crc kubenswrapper[5099]: E0122 14:14:03.113965 5099 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.193113 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.209056 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.232605 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.262504 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.270403 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:14:03 crc kubenswrapper[5099]: E0122 14:14:03.306381 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="800ms" Jan 22 14:14:03 crc kubenswrapper[5099]: W0122 14:14:03.397427 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-70b7423ef1517453c70bed7dd26df74d4b0abd1bd280af7aa57e51b2263adfed WatchSource:0}: Error finding container 70b7423ef1517453c70bed7dd26df74d4b0abd1bd280af7aa57e51b2263adfed: Status 404 returned error can't find the container with id 70b7423ef1517453c70bed7dd26df74d4b0abd1bd280af7aa57e51b2263adfed Jan 22 14:14:03 crc kubenswrapper[5099]: W0122 14:14:03.397754 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-b13d027ca95241ee4c15bd464e4803f1fdb41e6a3a537a8137c4bf235b8725d9 WatchSource:0}: Error finding container b13d027ca95241ee4c15bd464e4803f1fdb41e6a3a537a8137c4bf235b8725d9: Status 404 returned error can't find the container with id b13d027ca95241ee4c15bd464e4803f1fdb41e6a3a537a8137c4bf235b8725d9 Jan 22 14:14:03 crc kubenswrapper[5099]: W0122 14:14:03.400194 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-4c0c5aea7e6fa79c0b5b9c309234e8c35fafb6ac887499cdcbef145ce1ea486a WatchSource:0}: Error finding container 4c0c5aea7e6fa79c0b5b9c309234e8c35fafb6ac887499cdcbef145ce1ea486a: Status 404 returned error can't find the container with id 4c0c5aea7e6fa79c0b5b9c309234e8c35fafb6ac887499cdcbef145ce1ea486a Jan 22 14:14:03 crc kubenswrapper[5099]: W0122 14:14:03.400502 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-81d89edd158a91e83942665bf71e41546ebbfb8f0278009ecac1b64a5bd6ec69 WatchSource:0}: Error finding container 81d89edd158a91e83942665bf71e41546ebbfb8f0278009ecac1b64a5bd6ec69: Status 404 returned error can't find the container with id 81d89edd158a91e83942665bf71e41546ebbfb8f0278009ecac1b64a5bd6ec69 Jan 22 14:14:03 crc kubenswrapper[5099]: W0122 14:14:03.400872 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-5a818114d5f67678f19f5e3938836cf3e5926d33b04c56078dea9b8ea2268d51 WatchSource:0}: Error finding container 5a818114d5f67678f19f5e3938836cf3e5926d33b04c56078dea9b8ea2268d51: Status 404 returned error can't find the container with id 5a818114d5f67678f19f5e3938836cf3e5926d33b04c56078dea9b8ea2268d51 Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.401462 5099 
provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.515067 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.515985 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.516032 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.516046 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.516071 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:14:03 crc kubenswrapper[5099]: E0122 14:14:03.516717 5099 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Jan 22 14:14:03 crc kubenswrapper[5099]: E0122 14:14:03.552668 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 14:14:03 crc kubenswrapper[5099]: E0122 14:14:03.664349 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.683155 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.777435 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"b13d027ca95241ee4c15bd464e4803f1fdb41e6a3a537a8137c4bf235b8725d9"} Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.778479 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"4c0c5aea7e6fa79c0b5b9c309234e8c35fafb6ac887499cdcbef145ce1ea486a"} Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.779274 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"81d89edd158a91e83942665bf71e41546ebbfb8f0278009ecac1b64a5bd6ec69"} Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.779900 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"5a818114d5f67678f19f5e3938836cf3e5926d33b04c56078dea9b8ea2268d51"} Jan 22 14:14:03 crc kubenswrapper[5099]: I0122 14:14:03.780599 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"70b7423ef1517453c70bed7dd26df74d4b0abd1bd280af7aa57e51b2263adfed"} Jan 22 14:14:03 crc kubenswrapper[5099]: E0122 14:14:03.811796 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 14:14:04 crc kubenswrapper[5099]: E0122 14:14:04.087663 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 14:14:04 crc kubenswrapper[5099]: E0122 14:14:04.107326 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="1.6s" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.317355 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.318617 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.318677 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.318693 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.318726 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:14:04 crc kubenswrapper[5099]: E0122 14:14:04.319404 5099 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.683201 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.783430 5099 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="30f23a7da44acb5a6c4e3efee3ea14b7fb95d8d869cfce920d96254e2845d374" exitCode=0 Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.783609 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.783798 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"30f23a7da44acb5a6c4e3efee3ea14b7fb95d8d869cfce920d96254e2845d374"} Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.784070 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.784101 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.784111 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:04 crc kubenswrapper[5099]: E0122 14:14:04.784320 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.784757 5099 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="f88404a827bc7cfe0c884579fd832677e18d8f7599b980b81718d7f623c49879" exitCode=0 Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.784793 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"f88404a827bc7cfe0c884579fd832677e18d8f7599b980b81718d7f623c49879"} Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.784898 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.785407 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.785472 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.785499 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:04 crc kubenswrapper[5099]: E0122 14:14:04.785859 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.786146 5099 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="1197f96522c23c825f1d22265693ca0f6cdb3422caddc05dc5949c463e83bc10" exitCode=0 Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.786273 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.786291 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"1197f96522c23c825f1d22265693ca0f6cdb3422caddc05dc5949c463e83bc10"} Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.786563 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.786580 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.786588 5099 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:04 crc kubenswrapper[5099]: E0122 14:14:04.786704 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.791019 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="516978eba8fe79e87c1d0bcd72dca39f8671f5f446bc1ef52787527b28e698dd" exitCode=0 Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.791129 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"516978eba8fe79e87c1d0bcd72dca39f8671f5f446bc1ef52787527b28e698dd"} Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.791211 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.791567 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.791599 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.791661 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:04 crc kubenswrapper[5099]: E0122 14:14:04.791829 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.800140 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.801298 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.801325 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.801358 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.801385 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"c6a3a228fdd0a16ad075639d991808c4c1b385622b2989fe2232ea1e51504a96"} Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.801430 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"6042547e3c27102c016e9cc5bf795c6f38820018f8ebf572bb19c1802c91f35e"} Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.801442 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"95db530cd8291c29532a646c4d1cb47fe229d5c70e62f41fd31bfcae36643391"} Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.801453 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"acd69d42784e648b84b99c1cced0501b3fc34e0e8d0fa85436cc27014ab88d80"} Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.801626 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.802456 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.802495 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.802508 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:04 crc kubenswrapper[5099]: E0122 14:14:04.802737 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:04 crc kubenswrapper[5099]: I0122 14:14:04.812658 5099 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 22 14:14:04 crc kubenswrapper[5099]: E0122 14:14:04.813809 5099 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 22 14:14:05 crc kubenswrapper[5099]: E0122 14:14:05.464183 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.682617 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Jan 22 14:14:05 crc kubenswrapper[5099]: E0122 14:14:05.703404 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 14:14:05 crc kubenswrapper[5099]: E0122 14:14:05.708132 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="3.2s" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.808463 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"7c8458f5168dc84469986054180e4aa5b1dd5fd8b8fcdc76850d8d82d900f3ce"} Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.808560 5099 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"84c4d8685a7f253f25ece1f33240855b9460afdd4def7b3037033ef3bcbf4fa1"} Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.808575 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"4af6e790f0b67146252ea7d1dc240b875299d9697f1271a8da4bba5bbdbd3eb3"} Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.808725 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.813417 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.813470 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.813485 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:05 crc kubenswrapper[5099]: E0122 14:14:05.813784 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.815562 5099 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="909187eb0673067de5d3de63ca81410678dc81af9a1907ae01dfdc426e765e6e" exitCode=0 Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.815618 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"909187eb0673067de5d3de63ca81410678dc81af9a1907ae01dfdc426e765e6e"} Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.815809 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.816447 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.816467 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.816477 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:05 crc kubenswrapper[5099]: E0122 14:14:05.816630 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.819188 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"316c373a4bdd22b1ed9faf03ed0a93cbbdb81ab6e410df5243898da1c7d1be3b"} Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.819282 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.820622 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.820642 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.820653 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:05 crc kubenswrapper[5099]: E0122 14:14:05.820786 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.828736 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.828930 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d9b88a4b7534e1edf736c237a20e009b50a64a78928e2fd583b9ba86400b6452"} Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.828955 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b7376664c839f44962e7ba0e53f1895d1e99866d768aa568d31816e23f3959b1"} Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.828968 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"57d66dcb61882a772119bb25e7d1b4b49ea5f91bdbc8a2daf7f3301b0eeb4d3e"} Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.828979 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"6adc2225720223e57146af518d38a71d01a5a7d3c93cb4c4b55e20bf0cebbff0"} Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.829399 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.829424 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.829435 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:05 crc kubenswrapper[5099]: E0122 14:14:05.829707 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.920438 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.921645 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.921695 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.921709 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:05 crc kubenswrapper[5099]: I0122 14:14:05.921739 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 
22 14:14:06 crc kubenswrapper[5099]: I0122 14:14:06.833643 5099 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="fea88bccab57f098d19c9924811d811f906ba97614fa1a739e94297be1920318" exitCode=0 Jan 22 14:14:06 crc kubenswrapper[5099]: I0122 14:14:06.833776 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"fea88bccab57f098d19c9924811d811f906ba97614fa1a739e94297be1920318"} Jan 22 14:14:06 crc kubenswrapper[5099]: I0122 14:14:06.833887 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:06 crc kubenswrapper[5099]: I0122 14:14:06.834976 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:06 crc kubenswrapper[5099]: I0122 14:14:06.835072 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:06 crc kubenswrapper[5099]: I0122 14:14:06.835152 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:06 crc kubenswrapper[5099]: E0122 14:14:06.835547 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:06 crc kubenswrapper[5099]: I0122 14:14:06.839785 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"09d573200808005b83d28776eefefe413623d38a8cfa61a0852af58d9041f5c1"} Jan 22 14:14:06 crc kubenswrapper[5099]: I0122 14:14:06.840183 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:06 crc kubenswrapper[5099]: I0122 14:14:06.840215 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:06 crc kubenswrapper[5099]: I0122 14:14:06.840793 5099 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 14:14:06 crc kubenswrapper[5099]: I0122 14:14:06.840850 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:06 crc kubenswrapper[5099]: I0122 14:14:06.840929 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:06 crc kubenswrapper[5099]: I0122 14:14:06.840971 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:06 crc kubenswrapper[5099]: I0122 14:14:06.840985 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:06 crc kubenswrapper[5099]: E0122 14:14:06.841290 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:06 crc kubenswrapper[5099]: I0122 14:14:06.841565 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:06 crc kubenswrapper[5099]: I0122 14:14:06.841602 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:06 crc kubenswrapper[5099]: I0122 14:14:06.841613 5099 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:06 crc kubenswrapper[5099]: I0122 14:14:06.841620 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:06 crc kubenswrapper[5099]: I0122 14:14:06.841651 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:06 crc kubenswrapper[5099]: I0122 14:14:06.841670 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:06 crc kubenswrapper[5099]: E0122 14:14:06.841862 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:06 crc kubenswrapper[5099]: E0122 14:14:06.842313 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:07 crc kubenswrapper[5099]: I0122 14:14:07.095733 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:07 crc kubenswrapper[5099]: I0122 14:14:07.847918 5099 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 14:14:07 crc kubenswrapper[5099]: I0122 14:14:07.848276 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:07 crc kubenswrapper[5099]: I0122 14:14:07.847908 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"1842e08221440a926087d33f9e5707e2c8f41cc90a17bc232a4a501a592d022a"} Jan 22 14:14:07 crc kubenswrapper[5099]: I0122 14:14:07.848433 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"79b20be9073348beaa847d2bbae7c58afb7cc229c4848dc7f609292a8d5f3ab4"} Jan 22 14:14:07 crc kubenswrapper[5099]: I0122 14:14:07.848476 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"35a73dbae0e4eaf6c2464e9037ae9074afb792e27c83f6a5cd93693b7d2531fc"} Jan 22 14:14:07 crc kubenswrapper[5099]: I0122 14:14:07.848501 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"9132cc0eb41136054564502e0c129a302572c8d7658acfdcab4362ae222e0e9d"} Jan 22 14:14:07 crc kubenswrapper[5099]: I0122 14:14:07.849660 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:07 crc kubenswrapper[5099]: I0122 14:14:07.849742 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:07 crc kubenswrapper[5099]: I0122 14:14:07.849771 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:07 crc kubenswrapper[5099]: E0122 14:14:07.850566 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:08 crc kubenswrapper[5099]: I0122 14:14:08.856592 5099 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 14:14:08 
crc kubenswrapper[5099]: I0122 14:14:08.856652 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:08 crc kubenswrapper[5099]: I0122 14:14:08.856669 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:08 crc kubenswrapper[5099]: I0122 14:14:08.857428 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"077cefa4a49425bbb45c01346f8399426f94060bc6a1a68c577cad91459299c5"} Jan 22 14:14:08 crc kubenswrapper[5099]: I0122 14:14:08.857964 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:08 crc kubenswrapper[5099]: I0122 14:14:08.857986 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:08 crc kubenswrapper[5099]: I0122 14:14:08.857995 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:08 crc kubenswrapper[5099]: I0122 14:14:08.858002 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:08 crc kubenswrapper[5099]: I0122 14:14:08.858044 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:08 crc kubenswrapper[5099]: I0122 14:14:08.858067 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:08 crc kubenswrapper[5099]: E0122 14:14:08.858191 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:08 crc kubenswrapper[5099]: E0122 14:14:08.858709 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:09 crc kubenswrapper[5099]: I0122 14:14:09.081078 5099 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 22 14:14:09 crc kubenswrapper[5099]: I0122 14:14:09.778667 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:09 crc kubenswrapper[5099]: I0122 14:14:09.779084 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:09 crc kubenswrapper[5099]: I0122 14:14:09.780346 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:09 crc kubenswrapper[5099]: I0122 14:14:09.780410 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:09 crc kubenswrapper[5099]: I0122 14:14:09.780427 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:09 crc kubenswrapper[5099]: E0122 14:14:09.780920 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:09 crc kubenswrapper[5099]: I0122 14:14:09.859545 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:09 crc kubenswrapper[5099]: I0122 14:14:09.860464 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:09 crc kubenswrapper[5099]: I0122 14:14:09.860510 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:09 crc kubenswrapper[5099]: I0122 14:14:09.860528 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:09 crc kubenswrapper[5099]: E0122 14:14:09.861052 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:10 crc kubenswrapper[5099]: I0122 14:14:10.176273 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Jan 22 14:14:10 crc kubenswrapper[5099]: I0122 14:14:10.482480 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:10 crc kubenswrapper[5099]: I0122 14:14:10.482753 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:10 crc kubenswrapper[5099]: I0122 14:14:10.483840 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:10 crc kubenswrapper[5099]: I0122 14:14:10.484023 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:10 crc kubenswrapper[5099]: I0122 14:14:10.484055 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:10 crc kubenswrapper[5099]: E0122 14:14:10.484820 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:10 crc kubenswrapper[5099]: I0122 14:14:10.605358 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 22 14:14:10 crc kubenswrapper[5099]: I0122 14:14:10.862392 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:10 crc kubenswrapper[5099]: I0122 14:14:10.863310 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:10 crc kubenswrapper[5099]: I0122 14:14:10.863387 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:10 crc kubenswrapper[5099]: I0122 14:14:10.863408 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:10 crc kubenswrapper[5099]: E0122 14:14:10.864038 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.051566 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.051947 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.053125 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 
14:14:11.053237 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.053255 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:11 crc kubenswrapper[5099]: E0122 14:14:11.053643 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.754035 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.754426 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.756084 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.756227 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.756256 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:11 crc kubenswrapper[5099]: E0122 14:14:11.757305 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.759209 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.839291 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.839634 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.840545 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.840592 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.840606 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:11 crc kubenswrapper[5099]: E0122 14:14:11.841098 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.865008 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.865074 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.865012 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.865813 5099 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.865825 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.865903 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.865926 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.865860 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:11 crc kubenswrapper[5099]: I0122 14:14:11.866050 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:11 crc kubenswrapper[5099]: E0122 14:14:11.866582 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:11 crc kubenswrapper[5099]: E0122 14:14:11.867046 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:12 crc kubenswrapper[5099]: I0122 14:14:12.779668 5099 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Jan 22 14:14:12 crc kubenswrapper[5099]: I0122 14:14:12.779834 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Jan 22 14:14:12 crc kubenswrapper[5099]: E0122 14:14:12.815592 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 14:14:12 crc kubenswrapper[5099]: I0122 14:14:12.868420 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:12 crc kubenswrapper[5099]: I0122 14:14:12.869606 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:12 crc kubenswrapper[5099]: I0122 14:14:12.869658 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:12 crc kubenswrapper[5099]: I0122 14:14:12.869672 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:12 crc kubenswrapper[5099]: E0122 14:14:12.870112 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:12 crc kubenswrapper[5099]: I0122 14:14:12.905318 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:13 crc kubenswrapper[5099]: I0122 14:14:13.872560 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:13 crc kubenswrapper[5099]: I0122 14:14:13.873474 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:13 crc kubenswrapper[5099]: I0122 14:14:13.873531 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:13 crc kubenswrapper[5099]: I0122 14:14:13.873548 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:13 crc kubenswrapper[5099]: E0122 14:14:13.874101 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:13 crc kubenswrapper[5099]: I0122 14:14:13.877379 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:14 crc kubenswrapper[5099]: I0122 14:14:14.874891 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:14 crc kubenswrapper[5099]: I0122 14:14:14.875666 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:14 crc kubenswrapper[5099]: I0122 14:14:14.875728 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:14 crc kubenswrapper[5099]: I0122 14:14:14.875740 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:14 crc kubenswrapper[5099]: E0122 14:14:14.876137 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:15 crc kubenswrapper[5099]: E0122 14:14:15.923509 5099 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 22 14:14:16 crc kubenswrapper[5099]: I0122 14:14:16.205475 5099 trace.go:236] Trace[1726453203]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 14:14:06.203) (total time: 10001ms): Jan 22 14:14:16 crc kubenswrapper[5099]: Trace[1726453203]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:14:16.205) Jan 22 14:14:16 crc kubenswrapper[5099]: Trace[1726453203]: [10.001966481s] [10.001966481s] END Jan 22 14:14:16 crc kubenswrapper[5099]: E0122 14:14:16.205549 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 14:14:16 crc kubenswrapper[5099]: I0122 14:14:16.242827 5099 trace.go:236] Trace[1830789697]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 14:14:06.241) (total time: 10001ms): Jan 22 14:14:16 crc kubenswrapper[5099]: Trace[1830789697]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:14:16.242) Jan 22 14:14:16 crc kubenswrapper[5099]: Trace[1830789697]: [10.00146837s] [10.00146837s] END Jan 22 14:14:16 crc 
kubenswrapper[5099]: E0122 14:14:16.242882 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 14:14:16 crc kubenswrapper[5099]: I0122 14:14:16.684259 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 22 14:14:17 crc kubenswrapper[5099]: I0122 14:14:17.096361 5099 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded" start-of-body= Jan 22 14:14:17 crc kubenswrapper[5099]: I0122 14:14:17.096463 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded" Jan 22 14:14:17 crc kubenswrapper[5099]: I0122 14:14:17.282746 5099 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 14:14:17 crc kubenswrapper[5099]: I0122 14:14:17.282860 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 22 14:14:18 crc kubenswrapper[5099]: E0122 14:14:18.909749 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 22 14:14:19 crc kubenswrapper[5099]: I0122 14:14:19.124972 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:19 crc kubenswrapper[5099]: I0122 14:14:19.126213 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:19 crc kubenswrapper[5099]: I0122 14:14:19.126322 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:19 crc kubenswrapper[5099]: I0122 14:14:19.126347 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:19 crc kubenswrapper[5099]: I0122 14:14:19.126390 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:14:19 crc kubenswrapper[5099]: E0122 14:14:19.140656 5099 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 14:14:19 crc kubenswrapper[5099]: E0122 14:14:19.823562 5099 
reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 14:14:20 crc kubenswrapper[5099]: I0122 14:14:20.644316 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 22 14:14:20 crc kubenswrapper[5099]: I0122 14:14:20.645137 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:20 crc kubenswrapper[5099]: I0122 14:14:20.646209 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:20 crc kubenswrapper[5099]: I0122 14:14:20.646246 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:20 crc kubenswrapper[5099]: I0122 14:14:20.646261 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:20 crc kubenswrapper[5099]: E0122 14:14:20.646688 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:20 crc kubenswrapper[5099]: I0122 14:14:20.657883 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 22 14:14:20 crc kubenswrapper[5099]: I0122 14:14:20.896897 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:20 crc kubenswrapper[5099]: I0122 14:14:20.897980 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:20 crc kubenswrapper[5099]: I0122 14:14:20.898035 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:20 crc kubenswrapper[5099]: I0122 14:14:20.898050 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:20 crc kubenswrapper[5099]: E0122 14:14:20.898597 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:21 crc kubenswrapper[5099]: E0122 14:14:21.230945 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.102064 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.102931 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.103991 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.104049 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.104064 
5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.104502 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.107841 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.282292 5099 trace.go:236] Trace[1338905915]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 14:14:11.781) (total time: 10500ms): Jan 22 14:14:22 crc kubenswrapper[5099]: Trace[1338905915]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 10500ms (14:14:22.282) Jan 22 14:14:22 crc kubenswrapper[5099]: Trace[1338905915]: [10.50095456s] [10.50095456s] END Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.282343 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.282261 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cb8131cd1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.694745297 +0000 UTC m=+0.402495544,LastTimestamp:2026-01-22 14:14:02.694745297 +0000 UTC m=+0.402495544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.282890 5099 trace.go:236] Trace[1074119269]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 14:14:09.123) (total time: 13159ms): Jan 22 14:14:22 crc kubenswrapper[5099]: Trace[1074119269]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 13159ms (14:14:22.282) Jan 22 14:14:22 crc kubenswrapper[5099]: Trace[1074119269]: [13.159597925s] [13.159597925s] END Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.282911 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.283562 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc40a60b default 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764838411 +0000 UTC m=+0.472588648,LastTimestamp:2026-01-22 14:14:02.764838411 +0000 UTC m=+0.472588648,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.287531 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc410387 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764862343 +0000 UTC m=+0.472612580,LastTimestamp:2026-01-22 14:14:02.764862343 +0000 UTC m=+0.472612580,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.288991 5099 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.291946 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc412cc8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764872904 +0000 UTC m=+0.472623141,LastTimestamp:2026-01-22 14:14:02.764872904 +0000 UTC m=+0.472623141,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.296398 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbff5154d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.826995021 +0000 UTC m=+0.534745258,LastTimestamp:2026-01-22 14:14:02.826995021 +0000 UTC m=+0.534745258,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 
14:14:22.301667 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d131cbc40a60b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc40a60b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764838411 +0000 UTC m=+0.472588648,LastTimestamp:2026-01-22 14:14:02.8632892 +0000 UTC m=+0.571039437,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.308436 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d131cbc410387\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc410387 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764862343 +0000 UTC m=+0.472612580,LastTimestamp:2026-01-22 14:14:02.863311512 +0000 UTC m=+0.571061749,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.313311 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d131cbc412cc8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc412cc8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764872904 +0000 UTC m=+0.472623141,LastTimestamp:2026-01-22 14:14:02.863321333 +0000 UTC m=+0.571071570,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.318884 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d131cbc40a60b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc40a60b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764838411 +0000 UTC m=+0.472588648,LastTimestamp:2026-01-22 14:14:02.864595318 +0000 UTC m=+0.572345555,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.325036 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d131cbc410387\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc410387 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764862343 +0000 UTC m=+0.472612580,LastTimestamp:2026-01-22 14:14:02.864607139 +0000 UTC m=+0.572357376,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.331284 5099 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.331373 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.332769 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d131cbc412cc8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc412cc8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764872904 +0000 UTC m=+0.472623141,LastTimestamp:2026-01-22 14:14:02.86461515 +0000 UTC m=+0.572365387,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.334576 5099 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:43378->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.334690 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:43378->192.168.126.11:17697: read: connection reset by peer" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.338666 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d131cbc40a60b\" is forbidden: User 
\"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc40a60b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764838411 +0000 UTC m=+0.472588648,LastTimestamp:2026-01-22 14:14:02.864926605 +0000 UTC m=+0.572676872,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.343280 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d131cbc410387\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc410387 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764862343 +0000 UTC m=+0.472612580,LastTimestamp:2026-01-22 14:14:02.865061656 +0000 UTC m=+0.572811933,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.347992 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d131cbc412cc8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc412cc8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764872904 +0000 UTC m=+0.472623141,LastTimestamp:2026-01-22 14:14:02.865083708 +0000 UTC m=+0.572833985,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.353468 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d131cbc40a60b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc40a60b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764838411 +0000 UTC m=+0.472588648,LastTimestamp:2026-01-22 14:14:02.865562167 +0000 UTC m=+0.573312404,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.359333 
5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d131cbc410387\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc410387 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764862343 +0000 UTC m=+0.472612580,LastTimestamp:2026-01-22 14:14:02.865580918 +0000 UTC m=+0.573331155,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.363804 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d131cbc412cc8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc412cc8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764872904 +0000 UTC m=+0.472623141,LastTimestamp:2026-01-22 14:14:02.865590819 +0000 UTC m=+0.573341046,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.369634 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d131cbc40a60b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc40a60b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764838411 +0000 UTC m=+0.472588648,LastTimestamp:2026-01-22 14:14:02.866249014 +0000 UTC m=+0.573999251,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.376014 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d131cbc410387\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc410387 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764862343 +0000 UTC m=+0.472612580,LastTimestamp:2026-01-22 14:14:02.866259115 +0000 UTC m=+0.574009352,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.382091 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d131cbc412cc8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc412cc8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764872904 +0000 UTC m=+0.472623141,LastTimestamp:2026-01-22 14:14:02.866267125 +0000 UTC m=+0.574017362,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.386764 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d131cbc40a60b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc40a60b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764838411 +0000 UTC m=+0.472588648,LastTimestamp:2026-01-22 14:14:02.866858774 +0000 UTC m=+0.574609011,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.388650 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d131cbc410387\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc410387 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764862343 +0000 UTC m=+0.472612580,LastTimestamp:2026-01-22 14:14:02.866872325 +0000 UTC m=+0.574622562,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.392126 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d131cbc412cc8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc412cc8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764872904 +0000 UTC 
m=+0.472623141,LastTimestamp:2026-01-22 14:14:02.866880606 +0000 UTC m=+0.574630843,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.393144 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d131cbc40a60b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc40a60b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764838411 +0000 UTC m=+0.472588648,LastTimestamp:2026-01-22 14:14:02.867819343 +0000 UTC m=+0.575569570,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.398048 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d131cbc410387\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d131cbc410387 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:02.764862343 +0000 UTC m=+0.472612580,LastTimestamp:2026-01-22 14:14:02.867834364 +0000 UTC m=+0.575584601,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.403281 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d131ce237716b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:03.401769323 +0000 UTC m=+1.109519570,LastTimestamp:2026-01-22 14:14:03.401769323 +0000 UTC m=+1.109519570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.411755 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d131ce2385162 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:03.401826658 +0000 UTC m=+1.109576905,LastTimestamp:2026-01-22 14:14:03.401826658 +0000 UTC m=+1.109576905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.417488 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d131ce24363ea openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:03.402552298 +0000 UTC m=+1.110302535,LastTimestamp:2026-01-22 14:14:03.402552298 +0000 UTC m=+1.110302535,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.422994 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131ce248598d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:03.402877325 +0000 UTC m=+1.110627602,LastTimestamp:2026-01-22 14:14:03.402877325 +0000 UTC m=+1.110627602,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.427025 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131ce2664d41 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:03.404840257 +0000 UTC m=+1.112590564,LastTimestamp:2026-01-22 14:14:03.404840257 +0000 UTC m=+1.112590564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.433203 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d131d007706c5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:03.909252805 +0000 UTC m=+1.617003052,LastTimestamp:2026-01-22 14:14:03.909252805 +0000 UTC m=+1.617003052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.443554 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d007797aa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:03.909289898 +0000 UTC m=+1.617040135,LastTimestamp:2026-01-22 14:14:03.909289898 +0000 UTC m=+1.617040135,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.450003 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d131d0077f8be openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: 
wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:03.90931475 +0000 UTC m=+1.617064987,LastTimestamp:2026-01-22 14:14:03.90931475 +0000 UTC m=+1.617064987,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.457976 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d131d00799e07 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:03.909422599 +0000 UTC m=+1.617172836,LastTimestamp:2026-01-22 14:14:03.909422599 +0000 UTC m=+1.617172836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.463970 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131d0079aed3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:03.909426899 +0000 UTC m=+1.617177136,LastTimestamp:2026-01-22 14:14:03.909426899 +0000 UTC m=+1.617177136,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.471105 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d131d0166f374 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:03.9249765 +0000 UTC m=+1.632726737,LastTimestamp:2026-01-22 14:14:03.9249765 +0000 UTC m=+1.632726737,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.476982 5099 event.go:359] "Server rejected event (will not retry!)" err="events 
is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d01ceba5e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:03.93177763 +0000 UTC m=+1.639527867,LastTimestamp:2026-01-22 14:14:03.93177763 +0000 UTC m=+1.639527867,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.481621 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131d01ced2c8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:03.93178388 +0000 UTC m=+1.639534117,LastTimestamp:2026-01-22 14:14:03.93178388 +0000 UTC m=+1.639534117,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.487821 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d131d01cf1a50 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:03.931802192 +0000 UTC m=+1.639552429,LastTimestamp:2026-01-22 14:14:03.931802192 +0000 UTC m=+1.639552429,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.492426 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d131d01d3babf openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:03.932105407 +0000 UTC m=+1.639855644,LastTimestamp:2026-01-22 14:14:03.932105407 +0000 UTC m=+1.639855644,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.496315 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d131d01e1beef openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:03.933023983 +0000 UTC m=+1.640774220,LastTimestamp:2026-01-22 14:14:03.933023983 +0000 UTC m=+1.640774220,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.501362 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d131d119a1cfe openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:04.196764926 +0000 UTC m=+1.904515163,LastTimestamp:2026-01-22 14:14:04.196764926 +0000 UTC m=+1.904515163,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.506358 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d131d12642eec openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:04.210007788 +0000 UTC m=+1.917758055,LastTimestamp:2026-01-22 14:14:04.210007788 +0000 UTC m=+1.917758055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.511584 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d131d12782791 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:04.211316625 +0000 UTC m=+1.919066862,LastTimestamp:2026-01-22 14:14:04.211316625 +0000 UTC m=+1.919066862,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.517054 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d131d27a1caff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:04.566366975 +0000 UTC m=+2.274117212,LastTimestamp:2026-01-22 14:14:04.566366975 +0000 UTC m=+2.274117212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.521423 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d131d284964a1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:04.577350817 +0000 UTC m=+2.285101054,LastTimestamp:2026-01-22 14:14:04.577350817 +0000 UTC m=+2.285101054,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.528585 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d131d285f6538 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:04.57879276 +0000 UTC m=+2.286542997,LastTimestamp:2026-01-22 14:14:04.57879276 +0000 UTC m=+2.286542997,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.533561 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d131d3360b2a9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:04.763427497 +0000 UTC m=+2.471177734,LastTimestamp:2026-01-22 14:14:04.763427497 +0000 UTC m=+2.471177734,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.537993 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d131d33f13dbc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:04.772900284 +0000 UTC m=+2.480650521,LastTimestamp:2026-01-22 14:14:04.772900284 +0000 UTC m=+2.480650521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.542745 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d131d34ab9d91 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:04.785114513 +0000 UTC m=+2.492864750,LastTimestamp:2026-01-22 14:14:04.785114513 +0000 UTC m=+2.492864750,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.548013 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131d34c650f6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:04.786864374 +0000 UTC m=+2.494614641,LastTimestamp:2026-01-22 14:14:04.786864374 +0000 UTC m=+2.494614641,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.554741 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d131d35049414 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:04.790944788 +0000 UTC m=+2.498695025,LastTimestamp:2026-01-22 14:14:04.790944788 +0000 UTC m=+2.498695025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.560687 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d358c7643 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:04.799850051 +0000 UTC m=+2.507600308,LastTimestamp:2026-01-22 14:14:04.799850051 +0000 UTC m=+2.507600308,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.565879 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d131d43416794 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.029812116 +0000 UTC m=+2.737562353,LastTimestamp:2026-01-22 14:14:05.029812116 +0000 UTC m=+2.737562353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.569796 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d131d438f1897 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.034903703 +0000 UTC m=+2.742653940,LastTimestamp:2026-01-22 14:14:05.034903703 +0000 UTC m=+2.742653940,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.575920 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d43cd473d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.038978877 +0000 UTC m=+2.746729104,LastTimestamp:2026-01-22 14:14:05.038978877 +0000 UTC m=+2.746729104,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.580822 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d131d440159b2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.042391474 +0000 UTC m=+2.750141711,LastTimestamp:2026-01-22 14:14:05.042391474 +0000 UTC m=+2.750141711,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.586660 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131d440969f6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.042919926 +0000 UTC m=+2.750670163,LastTimestamp:2026-01-22 14:14:05.042919926 +0000 UTC 
m=+2.750670163,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.591723 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d131d44124d5e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.04350243 +0000 UTC m=+2.751252657,LastTimestamp:2026-01-22 14:14:05.04350243 +0000 UTC m=+2.751252657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.596006 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d131d443222ed openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.045588717 +0000 UTC m=+2.753338954,LastTimestamp:2026-01-22 14:14:05.045588717 +0000 UTC m=+2.753338954,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.600045 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d45361b32 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.062626098 +0000 UTC m=+2.770376335,LastTimestamp:2026-01-22 14:14:05.062626098 +0000 UTC m=+2.770376335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.603809 5099 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d45583317 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.064860439 +0000 UTC m=+2.772610666,LastTimestamp:2026-01-22 14:14:05.064860439 +0000 UTC m=+2.772610666,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.608510 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131d459e04d1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.069436113 +0000 UTC m=+2.777186350,LastTimestamp:2026-01-22 14:14:05.069436113 +0000 UTC m=+2.777186350,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.612281 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d131d4f3d9cbb openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.230890171 +0000 UTC m=+2.938640398,LastTimestamp:2026-01-22 14:14:05.230890171 +0000 UTC m=+2.938640398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.616882 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d4fc305bd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.239633341 +0000 UTC m=+2.947383578,LastTimestamp:2026-01-22 14:14:05.239633341 +0000 UTC m=+2.947383578,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.621338 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d131d5010df74 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.244735348 +0000 UTC m=+2.952485605,LastTimestamp:2026-01-22 14:14:05.244735348 +0000 UTC m=+2.952485605,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.627050 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d131d5023ba52 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.245971026 +0000 UTC m=+2.953721283,LastTimestamp:2026-01-22 14:14:05.245971026 +0000 UTC m=+2.953721283,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.631670 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d510c2ce0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container 
kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.261204704 +0000 UTC m=+2.968954941,LastTimestamp:2026-01-22 14:14:05.261204704 +0000 UTC m=+2.968954941,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.635731 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d51426ec4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.264760516 +0000 UTC m=+2.972510753,LastTimestamp:2026-01-22 14:14:05.264760516 +0000 UTC m=+2.972510753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.639977 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d131d5d1f5641 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.463787073 +0000 UTC m=+3.171537310,LastTimestamp:2026-01-22 14:14:05.463787073 +0000 UTC m=+3.171537310,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.646918 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d5d396f76 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.465497462 +0000 UTC m=+3.173247699,LastTimestamp:2026-01-22 14:14:05.465497462 +0000 UTC 
m=+3.173247699,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.651552 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d5e1d98c4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.480450244 +0000 UTC m=+3.188200481,LastTimestamp:2026-01-22 14:14:05.480450244 +0000 UTC m=+3.188200481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.656011 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d131d5e1dc9c1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.480462785 +0000 UTC m=+3.188213022,LastTimestamp:2026-01-22 14:14:05.480462785 +0000 UTC m=+3.188213022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.661495 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d5e2f435f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.481608031 +0000 UTC m=+3.189358298,LastTimestamp:2026-01-22 14:14:05.481608031 +0000 UTC m=+3.189358298,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.667222 5099 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d6aecd511 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.695358225 +0000 UTC m=+3.403108462,LastTimestamp:2026-01-22 14:14:05.695358225 +0000 UTC m=+3.403108462,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.673022 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d6c3892e1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.717099233 +0000 UTC m=+3.424849480,LastTimestamp:2026-01-22 14:14:05.717099233 +0000 UTC m=+3.424849480,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.678276 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d6c59f85f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.719287903 +0000 UTC m=+3.427038140,LastTimestamp:2026-01-22 14:14:05.719287903 +0000 UTC m=+3.427038140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.683087 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131d7239c7dd openshift-etcd 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.817841629 +0000 UTC m=+3.525591866,LastTimestamp:2026-01-22 14:14:05.817841629 +0000 UTC m=+3.525591866,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.687867 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.688290 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d799652bf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.941347007 +0000 UTC m=+3.649097244,LastTimestamp:2026-01-22 14:14:05.941347007 +0000 UTC m=+3.649097244,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.689843 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d7a952d10 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.95804904 +0000 UTC m=+3.665799267,LastTimestamp:2026-01-22 14:14:05.95804904 +0000 UTC m=+3.665799267,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.693654 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131d80ade675 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:06.060332661 +0000 UTC m=+3.768082888,LastTimestamp:2026-01-22 14:14:06.060332661 +0000 UTC m=+3.768082888,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.698173 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131d816aaf93 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:06.072704915 +0000 UTC m=+3.780455152,LastTimestamp:2026-01-22 14:14:06.072704915 +0000 UTC m=+3.780455152,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.704009 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131daef76704 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:06.836901636 +0000 UTC m=+4.544651873,LastTimestamp:2026-01-22 14:14:06.836901636 +0000 UTC m=+4.544651873,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.710008 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131dbb94570e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:07.048513294 +0000 UTC m=+4.756263541,LastTimestamp:2026-01-22 14:14:07.048513294 +0000 UTC m=+4.756263541,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.715249 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131dbc6d8632 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:07.062746674 +0000 UTC m=+4.770496961,LastTimestamp:2026-01-22 14:14:07.062746674 +0000 UTC m=+4.770496961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.720796 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131dbc82df7b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:07.064145787 +0000 UTC m=+4.771896064,LastTimestamp:2026-01-22 14:14:07.064145787 +0000 UTC m=+4.771896064,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.725423 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131dc7ddde24 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:07.254658596 +0000 UTC m=+4.962408833,LastTimestamp:2026-01-22 14:14:07.254658596 +0000 UTC m=+4.962408833,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.730767 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131dc8b41b3e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:07.268698942 +0000 UTC m=+4.976449179,LastTimestamp:2026-01-22 14:14:07.268698942 +0000 UTC m=+4.976449179,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.735367 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131dc8c8aab9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:07.270046393 +0000 UTC m=+4.977796630,LastTimestamp:2026-01-22 14:14:07.270046393 +0000 UTC m=+4.977796630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.739533 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131dd4686025 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:07.465062437 +0000 UTC m=+5.172812714,LastTimestamp:2026-01-22 14:14:07.465062437 +0000 UTC m=+5.172812714,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.743637 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131dd561156f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:07.481361775 +0000 UTC m=+5.189112052,LastTimestamp:2026-01-22 14:14:07.481361775 +0000 UTC m=+5.189112052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.748206 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131dd57deed7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:07.483252439 +0000 UTC m=+5.191002676,LastTimestamp:2026-01-22 14:14:07.483252439 +0000 UTC m=+5.191002676,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.753067 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131de29a3880 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:07.703210112 +0000 UTC m=+5.410960349,LastTimestamp:2026-01-22 14:14:07.703210112 +0000 UTC m=+5.410960349,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.758010 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131de367578f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:07.716652943 +0000 UTC m=+5.424403190,LastTimestamp:2026-01-22 14:14:07.716652943 +0000 UTC m=+5.424403190,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.762123 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131de3800fdf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:07.718272991 +0000 UTC m=+5.426023238,LastTimestamp:2026-01-22 14:14:07.718272991 +0000 UTC m=+5.426023238,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.766393 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131df2a34dfd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:07.972240893 +0000 UTC m=+5.679991140,LastTimestamp:2026-01-22 14:14:07.972240893 +0000 UTC m=+5.679991140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.770624 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d131df3611101 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:07.984677121 +0000 UTC m=+5.692427378,LastTimestamp:2026-01-22 14:14:07.984677121 +0000 UTC m=+5.692427378,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.778721 5099 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.778801 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.780336 5099 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 22 14:14:22 crc kubenswrapper[5099]: &Event{ObjectMeta:{kube-controller-manager-crc.188d131f11309c1b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Jan 22 14:14:22 crc kubenswrapper[5099]: body: Jan 22 14:14:22 crc kubenswrapper[5099]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:12.779785243 +0000 UTC m=+10.487535480,LastTimestamp:2026-01-22 14:14:12.779785243 +0000 UTC m=+10.487535480,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 14:14:22 crc kubenswrapper[5099]: > Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.787657 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d131f1132bb3e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:12.779924286 +0000 UTC m=+10.487674523,LastTimestamp:2026-01-22 14:14:12.779924286 +0000 UTC m=+10.487674523,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.793549 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 22 14:14:22 crc kubenswrapper[5099]: &Event{ObjectMeta:{kube-apiserver-crc.188d1320127b6bd5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/livez": context deadline exceeded Jan 22 14:14:22 crc kubenswrapper[5099]: body: Jan 22 14:14:22 crc kubenswrapper[5099]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:17.096432597 +0000 UTC m=+14.804182834,LastTimestamp:2026-01-22 14:14:17.096432597 +0000 UTC m=+14.804182834,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 14:14:22 crc kubenswrapper[5099]: > Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.797817 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d1320127c6df6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/livez\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:17.096498678 +0000 UTC m=+14.804248915,LastTimestamp:2026-01-22 14:14:17.096498678 +0000 UTC m=+14.804248915,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.803593 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 22 14:14:22 crc kubenswrapper[5099]: &Event{ObjectMeta:{kube-apiserver-crc.188d13201d978b00 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 22 14:14:22 crc kubenswrapper[5099]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 14:14:22 crc kubenswrapper[5099]: Jan 22 14:14:22 crc kubenswrapper[5099]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:17.28282496 +0000 UTC m=+14.990575217,LastTimestamp:2026-01-22 14:14:17.28282496 +0000 UTC m=+14.990575217,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 14:14:22 crc kubenswrapper[5099]: > Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.808681 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d13201d989c0e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:17.282894862 +0000 UTC m=+14.990645109,LastTimestamp:2026-01-22 14:14:17.282894862 +0000 UTC 
m=+14.990645109,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.813628 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 22 14:14:22 crc kubenswrapper[5099]: &Event{ObjectMeta:{kube-apiserver-crc.188d13214a81cc38 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": EOF Jan 22 14:14:22 crc kubenswrapper[5099]: body: Jan 22 14:14:22 crc kubenswrapper[5099]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:22.33134188 +0000 UTC m=+20.039092117,LastTimestamp:2026-01-22 14:14:22.33134188 +0000 UTC m=+20.039092117,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 14:14:22 crc kubenswrapper[5099]: > Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.816096 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.818843 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d13214a829a70 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": EOF,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:22.331394672 +0000 UTC m=+20.039144909,LastTimestamp:2026-01-22 14:14:22.331394672 +0000 UTC m=+20.039144909,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.823780 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 22 14:14:22 crc kubenswrapper[5099]: &Event{ObjectMeta:{kube-apiserver-crc.188d13214ab416e8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:43378->192.168.126.11:17697: read: connection reset by peer Jan 22 14:14:22 
crc kubenswrapper[5099]: body: Jan 22 14:14:22 crc kubenswrapper[5099]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:22.3346378 +0000 UTC m=+20.042388037,LastTimestamp:2026-01-22 14:14:22.3346378 +0000 UTC m=+20.042388037,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 14:14:22 crc kubenswrapper[5099]: > Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.828461 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d13214ab55f5c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:43378->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:22.334721884 +0000 UTC m=+20.042472121,LastTimestamp:2026-01-22 14:14:22.334721884 +0000 UTC m=+20.042472121,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.834500 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 22 14:14:22 crc kubenswrapper[5099]: &Event{ObjectMeta:{kube-controller-manager-crc.188d1321652d0bbe openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 22 14:14:22 crc kubenswrapper[5099]: body: Jan 22 14:14:22 crc kubenswrapper[5099]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:22.778772414 +0000 UTC m=+20.486522651,LastTimestamp:2026-01-22 14:14:22.778772414 +0000 UTC m=+20.486522651,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 14:14:22 crc kubenswrapper[5099]: > Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.839383 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d1321652dcdb2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:22.778822066 +0000 UTC m=+20.486572303,LastTimestamp:2026-01-22 14:14:22.778822066 +0000 UTC m=+20.486572303,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.902835 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.904934 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="09d573200808005b83d28776eefefe413623d38a8cfa61a0852af58d9041f5c1" exitCode=255 Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.904986 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"09d573200808005b83d28776eefefe413623d38a8cfa61a0852af58d9041f5c1"} Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.905273 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.906319 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.906370 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.906387 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.906822 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:22 crc kubenswrapper[5099]: I0122 14:14:22.907217 5099 scope.go:117] "RemoveContainer" containerID="09d573200808005b83d28776eefefe413623d38a8cfa61a0852af58d9041f5c1" Jan 22 14:14:22 crc kubenswrapper[5099]: E0122 14:14:22.916450 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d131d6c59f85f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d6c59f85f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.719287903 +0000 UTC m=+3.427038140,LastTimestamp:2026-01-22 14:14:22.908435523 +0000 UTC m=+20.616185760,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:23 crc kubenswrapper[5099]: E0122 14:14:23.119838 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d131d799652bf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d799652bf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.941347007 +0000 UTC m=+3.649097244,LastTimestamp:2026-01-22 14:14:23.11410926 +0000 UTC m=+20.821859497,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:23 crc kubenswrapper[5099]: E0122 14:14:23.133518 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d131d7a952d10\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d7a952d10 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.95804904 +0000 UTC m=+3.665799267,LastTimestamp:2026-01-22 14:14:23.125524863 +0000 UTC m=+20.833275100,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:23 crc kubenswrapper[5099]: I0122 14:14:23.688296 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:23 crc kubenswrapper[5099]: I0122 14:14:23.911110 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 22 14:14:23 crc kubenswrapper[5099]: I0122 14:14:23.912921 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4b367e33f487ff932846cca1ba790854cf71df98bee272995caae24d6d1a393b"} Jan 22 14:14:23 crc kubenswrapper[5099]: I0122 14:14:23.913147 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:23 
crc kubenswrapper[5099]: I0122 14:14:23.913813 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:23 crc kubenswrapper[5099]: I0122 14:14:23.913867 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:23 crc kubenswrapper[5099]: I0122 14:14:23.913880 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:23 crc kubenswrapper[5099]: E0122 14:14:23.914390 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:24 crc kubenswrapper[5099]: I0122 14:14:24.686565 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:24 crc kubenswrapper[5099]: I0122 14:14:24.918390 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 22 14:14:24 crc kubenswrapper[5099]: I0122 14:14:24.919461 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 22 14:14:24 crc kubenswrapper[5099]: I0122 14:14:24.921921 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="4b367e33f487ff932846cca1ba790854cf71df98bee272995caae24d6d1a393b" exitCode=255 Jan 22 14:14:24 crc kubenswrapper[5099]: I0122 14:14:24.922005 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"4b367e33f487ff932846cca1ba790854cf71df98bee272995caae24d6d1a393b"} Jan 22 14:14:24 crc kubenswrapper[5099]: I0122 14:14:24.922130 5099 scope.go:117] "RemoveContainer" containerID="09d573200808005b83d28776eefefe413623d38a8cfa61a0852af58d9041f5c1" Jan 22 14:14:24 crc kubenswrapper[5099]: I0122 14:14:24.922275 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:24 crc kubenswrapper[5099]: I0122 14:14:24.923026 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:24 crc kubenswrapper[5099]: I0122 14:14:24.923075 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:24 crc kubenswrapper[5099]: I0122 14:14:24.923089 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:24 crc kubenswrapper[5099]: E0122 14:14:24.923519 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:24 crc kubenswrapper[5099]: I0122 14:14:24.923834 5099 scope.go:117] "RemoveContainer" containerID="4b367e33f487ff932846cca1ba790854cf71df98bee272995caae24d6d1a393b" Jan 22 14:14:24 crc kubenswrapper[5099]: E0122 14:14:24.924129 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 14:14:24 crc kubenswrapper[5099]: E0122 14:14:24.929127 5099 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d1321e50bd816 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:24.92408015 +0000 UTC m=+22.631830387,LastTimestamp:2026-01-22 14:14:24.92408015 +0000 UTC m=+22.631830387,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:25 crc kubenswrapper[5099]: E0122 14:14:25.315317 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 14:14:25 crc kubenswrapper[5099]: I0122 14:14:25.541122 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:25 crc kubenswrapper[5099]: I0122 14:14:25.542613 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:25 crc kubenswrapper[5099]: I0122 14:14:25.542681 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:25 crc kubenswrapper[5099]: I0122 14:14:25.542696 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:25 crc kubenswrapper[5099]: I0122 14:14:25.542728 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:14:25 crc kubenswrapper[5099]: E0122 14:14:25.552324 5099 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 14:14:25 crc kubenswrapper[5099]: I0122 14:14:25.689784 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:25 crc kubenswrapper[5099]: I0122 14:14:25.926022 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 22 14:14:25 crc kubenswrapper[5099]: I0122 14:14:25.928738 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:25 crc 
kubenswrapper[5099]: I0122 14:14:25.929744 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:25 crc kubenswrapper[5099]: I0122 14:14:25.929784 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:25 crc kubenswrapper[5099]: I0122 14:14:25.929797 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:25 crc kubenswrapper[5099]: E0122 14:14:25.935797 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:25 crc kubenswrapper[5099]: I0122 14:14:25.936049 5099 scope.go:117] "RemoveContainer" containerID="4b367e33f487ff932846cca1ba790854cf71df98bee272995caae24d6d1a393b" Jan 22 14:14:25 crc kubenswrapper[5099]: E0122 14:14:25.936382 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 14:14:25 crc kubenswrapper[5099]: E0122 14:14:25.943905 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d1321e50bd816\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d1321e50bd816 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:24.92408015 +0000 UTC m=+22.631830387,LastTimestamp:2026-01-22 14:14:25.93633063 +0000 UTC m=+23.644080867,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:26 crc kubenswrapper[5099]: I0122 14:14:26.686472 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:27 crc kubenswrapper[5099]: I0122 14:14:27.689464 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:28 crc kubenswrapper[5099]: I0122 14:14:28.687449 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:29 crc kubenswrapper[5099]: E0122 14:14:29.053982 5099 reflector.go:200] "Failed to watch" 
err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 14:14:29 crc kubenswrapper[5099]: I0122 14:14:29.689541 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:29 crc kubenswrapper[5099]: I0122 14:14:29.784527 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:29 crc kubenswrapper[5099]: I0122 14:14:29.784773 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:29 crc kubenswrapper[5099]: I0122 14:14:29.785799 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:29 crc kubenswrapper[5099]: I0122 14:14:29.785879 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:29 crc kubenswrapper[5099]: I0122 14:14:29.785906 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:29 crc kubenswrapper[5099]: E0122 14:14:29.786396 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:29 crc kubenswrapper[5099]: I0122 14:14:29.791269 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:14:29 crc kubenswrapper[5099]: E0122 14:14:29.893853 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 14:14:29 crc kubenswrapper[5099]: I0122 14:14:29.939696 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:29 crc kubenswrapper[5099]: I0122 14:14:29.940894 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:29 crc kubenswrapper[5099]: I0122 14:14:29.940968 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:29 crc kubenswrapper[5099]: I0122 14:14:29.940988 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:29 crc kubenswrapper[5099]: E0122 14:14:29.941600 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:30 crc kubenswrapper[5099]: I0122 14:14:30.687269 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:31 crc kubenswrapper[5099]: I0122 14:14:31.691109 5099 csi_plugin.go:988] Failed to contact API server 
when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:32 crc kubenswrapper[5099]: E0122 14:14:32.321850 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 14:14:32 crc kubenswrapper[5099]: I0122 14:14:32.553441 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:32 crc kubenswrapper[5099]: I0122 14:14:32.554546 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:32 crc kubenswrapper[5099]: I0122 14:14:32.554593 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:32 crc kubenswrapper[5099]: I0122 14:14:32.554604 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:32 crc kubenswrapper[5099]: I0122 14:14:32.554630 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:14:32 crc kubenswrapper[5099]: E0122 14:14:32.568668 5099 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 14:14:32 crc kubenswrapper[5099]: I0122 14:14:32.688641 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:32 crc kubenswrapper[5099]: E0122 14:14:32.816472 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 14:14:33 crc kubenswrapper[5099]: E0122 14:14:33.315149 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 14:14:33 crc kubenswrapper[5099]: E0122 14:14:33.380132 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 14:14:33 crc kubenswrapper[5099]: I0122 14:14:33.687360 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:33 crc kubenswrapper[5099]: I0122 14:14:33.914192 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:33 crc kubenswrapper[5099]: I0122 14:14:33.915329 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" 
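
The entries above keep failing the same way: the kubelet is still authenticating as system:anonymous, so its get/list/create/patch requests for csinodes, services, csidrivers, runtimeclasses, nodes, leases and events are all rejected. A minimal sketch of tallying those denials from a saved excerpt follows; kubelet.log is a hypothetical file name, and the regex assumes only the message format visible in the lines above (quotes appear both escaped and unescaped).

#!/usr/bin/env python3
"""Sketch: count the recurring system:anonymous "forbidden" denials in a
kubelet journal excerpt like the one above, grouped by verb and resource.
The file name kubelet.log is an assumption, not taken from the log."""
import re
from collections import Counter

# Quotes may be escaped (\") or plain, depending on how the message was logged.
DENIAL = re.compile(
    r'User \\?"system:anonymous\\?" cannot (?P<verb>\w+) resource \\?"(?P<resource>[\w.]+)\\?"'
)

counts = Counter()
with open("kubelet.log", encoding="utf-8") as f:  # hypothetical path
    for line in f:
        m = DENIAL.search(line)
        if m:
            counts[(m["verb"], m["resource"])] += 1

for (verb, resource), n in counts.most_common():
    print(f"{n:4d}  {verb:<6} {resource}")

Run against this excerpt it would show the csinodes/get and nodes/get denials dominating; they stop appearing once the kubelet's client certificate (csr-d7nlz) is issued further down in the log.
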
Jan 22 14:14:33 crc kubenswrapper[5099]: I0122 14:14:33.916758 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:33 crc kubenswrapper[5099]: I0122 14:14:33.916827 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:33 crc kubenswrapper[5099]: I0122 14:14:33.916846 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:33 crc kubenswrapper[5099]: E0122 14:14:33.917417 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:33 crc kubenswrapper[5099]: I0122 14:14:33.917774 5099 scope.go:117] "RemoveContainer" containerID="4b367e33f487ff932846cca1ba790854cf71df98bee272995caae24d6d1a393b" Jan 22 14:14:33 crc kubenswrapper[5099]: E0122 14:14:33.918035 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 14:14:33 crc kubenswrapper[5099]: E0122 14:14:33.924968 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d1321e50bd816\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d1321e50bd816 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:24.92408015 +0000 UTC m=+22.631830387,LastTimestamp:2026-01-22 14:14:33.917995694 +0000 UTC m=+31.625745931,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:34 crc kubenswrapper[5099]: I0122 14:14:34.634565 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:34 crc kubenswrapper[5099]: I0122 14:14:34.634868 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:34 crc kubenswrapper[5099]: I0122 14:14:34.635954 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:34 crc kubenswrapper[5099]: I0122 14:14:34.636011 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:34 crc kubenswrapper[5099]: I0122 14:14:34.636023 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:34 crc kubenswrapper[5099]: E0122 14:14:34.636532 5099 kubelet.go:3336] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:34 crc kubenswrapper[5099]: I0122 14:14:34.636801 5099 scope.go:117] "RemoveContainer" containerID="4b367e33f487ff932846cca1ba790854cf71df98bee272995caae24d6d1a393b" Jan 22 14:14:34 crc kubenswrapper[5099]: E0122 14:14:34.644275 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d131d6c59f85f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d6c59f85f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.719287903 +0000 UTC m=+3.427038140,LastTimestamp:2026-01-22 14:14:34.63792809 +0000 UTC m=+32.345678327,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:34 crc kubenswrapper[5099]: I0122 14:14:34.688097 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:34 crc kubenswrapper[5099]: E0122 14:14:34.882381 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d131d799652bf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d799652bf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.941347007 +0000 UTC m=+3.649097244,LastTimestamp:2026-01-22 14:14:34.874884445 +0000 UTC m=+32.582634682,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:34 crc kubenswrapper[5099]: E0122 14:14:34.894674 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d131d7a952d10\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d7a952d10 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.95804904 +0000 UTC m=+3.665799267,LastTimestamp:2026-01-22 14:14:34.88781803 +0000 UTC m=+32.595568267,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:34 crc kubenswrapper[5099]: I0122 14:14:34.957796 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 22 14:14:34 crc kubenswrapper[5099]: I0122 14:14:34.960929 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"3f748f2f1474e6996ac0e1ad8c515211118a44a6e094f6ed9ebc0211dbfc0359"} Jan 22 14:14:34 crc kubenswrapper[5099]: I0122 14:14:34.961476 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:34 crc kubenswrapper[5099]: I0122 14:14:34.962188 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:34 crc kubenswrapper[5099]: I0122 14:14:34.962238 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:34 crc kubenswrapper[5099]: I0122 14:14:34.962252 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:34 crc kubenswrapper[5099]: E0122 14:14:34.962701 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:35 crc kubenswrapper[5099]: I0122 14:14:35.687085 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:36 crc kubenswrapper[5099]: I0122 14:14:36.690504 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:36 crc kubenswrapper[5099]: I0122 14:14:36.971125 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 22 14:14:36 crc kubenswrapper[5099]: I0122 14:14:36.971811 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 22 14:14:36 crc kubenswrapper[5099]: I0122 14:14:36.974247 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="3f748f2f1474e6996ac0e1ad8c515211118a44a6e094f6ed9ebc0211dbfc0359" exitCode=255 Jan 22 14:14:36 crc kubenswrapper[5099]: I0122 14:14:36.974309 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"3f748f2f1474e6996ac0e1ad8c515211118a44a6e094f6ed9ebc0211dbfc0359"} Jan 22 14:14:36 crc kubenswrapper[5099]: I0122 14:14:36.974353 5099 scope.go:117] "RemoveContainer" containerID="4b367e33f487ff932846cca1ba790854cf71df98bee272995caae24d6d1a393b" Jan 22 14:14:36 crc kubenswrapper[5099]: I0122 14:14:36.974722 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:36 crc kubenswrapper[5099]: I0122 14:14:36.975598 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:36 crc kubenswrapper[5099]: I0122 14:14:36.975660 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:36 crc kubenswrapper[5099]: I0122 14:14:36.975683 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:36 crc kubenswrapper[5099]: E0122 14:14:36.976279 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:36 crc kubenswrapper[5099]: I0122 14:14:36.976729 5099 scope.go:117] "RemoveContainer" containerID="3f748f2f1474e6996ac0e1ad8c515211118a44a6e094f6ed9ebc0211dbfc0359" Jan 22 14:14:36 crc kubenswrapper[5099]: E0122 14:14:36.977187 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 14:14:36 crc kubenswrapper[5099]: E0122 14:14:36.982732 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d1321e50bd816\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d1321e50bd816 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:24.92408015 +0000 UTC m=+22.631830387,LastTimestamp:2026-01-22 14:14:36.977103757 +0000 UTC m=+34.684854014,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:37 crc kubenswrapper[5099]: I0122 14:14:37.689466 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:37 crc kubenswrapper[5099]: I0122 14:14:37.980974 5099 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 22 14:14:38 crc kubenswrapper[5099]: I0122 14:14:38.688938 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:39 crc kubenswrapper[5099]: E0122 14:14:39.329189 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 14:14:39 crc kubenswrapper[5099]: I0122 14:14:39.568820 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:39 crc kubenswrapper[5099]: I0122 14:14:39.570796 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:39 crc kubenswrapper[5099]: I0122 14:14:39.570860 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:39 crc kubenswrapper[5099]: I0122 14:14:39.570877 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:39 crc kubenswrapper[5099]: I0122 14:14:39.570907 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:14:39 crc kubenswrapper[5099]: E0122 14:14:39.582598 5099 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 14:14:39 crc kubenswrapper[5099]: I0122 14:14:39.688967 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:40 crc kubenswrapper[5099]: I0122 14:14:40.689068 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:41 crc kubenswrapper[5099]: I0122 14:14:41.687922 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:42 crc kubenswrapper[5099]: I0122 14:14:42.689940 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:42 crc kubenswrapper[5099]: E0122 14:14:42.817205 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 14:14:43 crc kubenswrapper[5099]: I0122 14:14:43.688070 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in 
API group "storage.k8s.io" at the cluster scope Jan 22 14:14:44 crc kubenswrapper[5099]: I0122 14:14:44.635018 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:44 crc kubenswrapper[5099]: I0122 14:14:44.635356 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:44 crc kubenswrapper[5099]: I0122 14:14:44.636367 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:44 crc kubenswrapper[5099]: I0122 14:14:44.636416 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:44 crc kubenswrapper[5099]: I0122 14:14:44.636435 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:44 crc kubenswrapper[5099]: E0122 14:14:44.636871 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:44 crc kubenswrapper[5099]: I0122 14:14:44.637255 5099 scope.go:117] "RemoveContainer" containerID="3f748f2f1474e6996ac0e1ad8c515211118a44a6e094f6ed9ebc0211dbfc0359" Jan 22 14:14:44 crc kubenswrapper[5099]: E0122 14:14:44.637527 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 14:14:44 crc kubenswrapper[5099]: E0122 14:14:44.644276 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d1321e50bd816\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d1321e50bd816 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:24.92408015 +0000 UTC m=+22.631830387,LastTimestamp:2026-01-22 14:14:44.63747011 +0000 UTC m=+42.345220357,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:44 crc kubenswrapper[5099]: I0122 14:14:44.689632 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:44 crc kubenswrapper[5099]: I0122 14:14:44.962530 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:14:45 crc kubenswrapper[5099]: I0122 14:14:45.007481 5099 kubelet_node_status.go:413] 
"Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:45 crc kubenswrapper[5099]: I0122 14:14:45.011319 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:45 crc kubenswrapper[5099]: I0122 14:14:45.011389 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:45 crc kubenswrapper[5099]: I0122 14:14:45.011405 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:45 crc kubenswrapper[5099]: E0122 14:14:45.011939 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:45 crc kubenswrapper[5099]: I0122 14:14:45.012251 5099 scope.go:117] "RemoveContainer" containerID="3f748f2f1474e6996ac0e1ad8c515211118a44a6e094f6ed9ebc0211dbfc0359" Jan 22 14:14:45 crc kubenswrapper[5099]: E0122 14:14:45.012508 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 14:14:45 crc kubenswrapper[5099]: E0122 14:14:45.018409 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d1321e50bd816\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d1321e50bd816 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:24.92408015 +0000 UTC m=+22.631830387,LastTimestamp:2026-01-22 14:14:45.012464895 +0000 UTC m=+42.720215132,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:45 crc kubenswrapper[5099]: I0122 14:14:45.689508 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:46 crc kubenswrapper[5099]: E0122 14:14:46.336058 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 14:14:46 crc kubenswrapper[5099]: E0122 14:14:46.574529 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster 
scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 14:14:46 crc kubenswrapper[5099]: I0122 14:14:46.583025 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:46 crc kubenswrapper[5099]: I0122 14:14:46.584672 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:46 crc kubenswrapper[5099]: I0122 14:14:46.584754 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:46 crc kubenswrapper[5099]: I0122 14:14:46.584768 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:46 crc kubenswrapper[5099]: I0122 14:14:46.584799 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:14:46 crc kubenswrapper[5099]: E0122 14:14:46.594825 5099 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 14:14:46 crc kubenswrapper[5099]: I0122 14:14:46.688108 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:47 crc kubenswrapper[5099]: I0122 14:14:47.688523 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:48 crc kubenswrapper[5099]: I0122 14:14:48.688651 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:49 crc kubenswrapper[5099]: I0122 14:14:49.687429 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:50 crc kubenswrapper[5099]: I0122 14:14:50.687757 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:51 crc kubenswrapper[5099]: E0122 14:14:51.012755 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 14:14:51 crc kubenswrapper[5099]: I0122 14:14:51.059349 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:14:51 crc kubenswrapper[5099]: I0122 14:14:51.059903 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:51 crc kubenswrapper[5099]: I0122 14:14:51.060923 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:51 crc kubenswrapper[5099]: I0122 14:14:51.060995 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:51 crc kubenswrapper[5099]: I0122 14:14:51.061017 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:51 crc kubenswrapper[5099]: E0122 14:14:51.061702 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:51 crc kubenswrapper[5099]: E0122 14:14:51.396793 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 14:14:51 crc kubenswrapper[5099]: I0122 14:14:51.690342 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:52 crc kubenswrapper[5099]: I0122 14:14:52.687572 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:52 crc kubenswrapper[5099]: E0122 14:14:52.818450 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 14:14:53 crc kubenswrapper[5099]: E0122 14:14:53.347499 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 14:14:53 crc kubenswrapper[5099]: E0122 14:14:53.354668 5099 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 14:14:53 crc kubenswrapper[5099]: I0122 14:14:53.595613 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:53 crc kubenswrapper[5099]: I0122 14:14:53.598269 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:53 crc kubenswrapper[5099]: I0122 14:14:53.598335 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:53 crc kubenswrapper[5099]: I0122 14:14:53.598358 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:53 crc kubenswrapper[5099]: I0122 14:14:53.598393 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:14:53 crc kubenswrapper[5099]: E0122 14:14:53.614816 5099 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is 
forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 14:14:53 crc kubenswrapper[5099]: I0122 14:14:53.687586 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:54 crc kubenswrapper[5099]: I0122 14:14:54.688082 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:55 crc kubenswrapper[5099]: I0122 14:14:55.690561 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:56 crc kubenswrapper[5099]: I0122 14:14:56.687066 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:57 crc kubenswrapper[5099]: I0122 14:14:57.688140 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:57 crc kubenswrapper[5099]: I0122 14:14:57.761013 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:57 crc kubenswrapper[5099]: I0122 14:14:57.762416 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:57 crc kubenswrapper[5099]: I0122 14:14:57.762542 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:57 crc kubenswrapper[5099]: I0122 14:14:57.762623 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:57 crc kubenswrapper[5099]: E0122 14:14:57.763073 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:57 crc kubenswrapper[5099]: I0122 14:14:57.763509 5099 scope.go:117] "RemoveContainer" containerID="3f748f2f1474e6996ac0e1ad8c515211118a44a6e094f6ed9ebc0211dbfc0359" Jan 22 14:14:57 crc kubenswrapper[5099]: E0122 14:14:57.770805 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d131d6c59f85f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d131d6c59f85f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:05.719287903 +0000 UTC m=+3.427038140,LastTimestamp:2026-01-22 14:14:57.765692337 +0000 UTC m=+55.473442574,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:14:58 crc kubenswrapper[5099]: I0122 14:14:58.052411 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 22 14:14:58 crc kubenswrapper[5099]: I0122 14:14:58.055153 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"563099c8fdc6fc3a36ff525a462a2d830742426eacbf17781d1a891dab9018d8"} Jan 22 14:14:58 crc kubenswrapper[5099]: I0122 14:14:58.055529 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:14:58 crc kubenswrapper[5099]: I0122 14:14:58.057255 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:14:58 crc kubenswrapper[5099]: I0122 14:14:58.057304 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:14:58 crc kubenswrapper[5099]: I0122 14:14:58.057316 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:14:58 crc kubenswrapper[5099]: E0122 14:14:58.057682 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:14:58 crc kubenswrapper[5099]: I0122 14:14:58.687986 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:14:59 crc kubenswrapper[5099]: I0122 14:14:59.687461 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:00 crc kubenswrapper[5099]: I0122 14:15:00.062949 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 22 14:15:00 crc kubenswrapper[5099]: I0122 14:15:00.063954 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 22 14:15:00 crc kubenswrapper[5099]: I0122 14:15:00.065951 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="563099c8fdc6fc3a36ff525a462a2d830742426eacbf17781d1a891dab9018d8" exitCode=255 Jan 22 14:15:00 crc kubenswrapper[5099]: I0122 14:15:00.066061 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"563099c8fdc6fc3a36ff525a462a2d830742426eacbf17781d1a891dab9018d8"} Jan 22 14:15:00 crc kubenswrapper[5099]: I0122 14:15:00.066155 5099 scope.go:117] "RemoveContainer" containerID="3f748f2f1474e6996ac0e1ad8c515211118a44a6e094f6ed9ebc0211dbfc0359" Jan 22 14:15:00 crc kubenswrapper[5099]: I0122 14:15:00.066445 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:00 crc kubenswrapper[5099]: I0122 14:15:00.067394 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:00 crc kubenswrapper[5099]: I0122 14:15:00.067435 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:00 crc kubenswrapper[5099]: I0122 14:15:00.067447 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:00 crc kubenswrapper[5099]: E0122 14:15:00.067787 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:00 crc kubenswrapper[5099]: I0122 14:15:00.068115 5099 scope.go:117] "RemoveContainer" containerID="563099c8fdc6fc3a36ff525a462a2d830742426eacbf17781d1a891dab9018d8" Jan 22 14:15:00 crc kubenswrapper[5099]: E0122 14:15:00.068411 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 14:15:00 crc kubenswrapper[5099]: E0122 14:15:00.074544 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d1321e50bd816\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d1321e50bd816 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:24.92408015 +0000 UTC m=+22.631830387,LastTimestamp:2026-01-22 14:15:00.068365633 +0000 UTC m=+57.776115870,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:00 crc kubenswrapper[5099]: E0122 14:15:00.354193 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 14:15:00 crc kubenswrapper[5099]: I0122 14:15:00.615682 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 
14:15:00 crc kubenswrapper[5099]: I0122 14:15:00.617042 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:00 crc kubenswrapper[5099]: I0122 14:15:00.617086 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:00 crc kubenswrapper[5099]: I0122 14:15:00.617111 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:00 crc kubenswrapper[5099]: I0122 14:15:00.617144 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:15:00 crc kubenswrapper[5099]: E0122 14:15:00.631003 5099 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 14:15:00 crc kubenswrapper[5099]: I0122 14:15:00.688258 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:01 crc kubenswrapper[5099]: I0122 14:15:01.071788 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 22 14:15:01 crc kubenswrapper[5099]: I0122 14:15:01.688330 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:02 crc kubenswrapper[5099]: I0122 14:15:02.688572 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:02 crc kubenswrapper[5099]: E0122 14:15:02.819633 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 14:15:03 crc kubenswrapper[5099]: I0122 14:15:03.687754 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:04 crc kubenswrapper[5099]: I0122 14:15:04.634132 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:15:04 crc kubenswrapper[5099]: I0122 14:15:04.634563 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:04 crc kubenswrapper[5099]: I0122 14:15:04.635704 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:04 crc kubenswrapper[5099]: I0122 14:15:04.635764 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:04 crc kubenswrapper[5099]: I0122 14:15:04.635786 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:04 crc kubenswrapper[5099]: E0122 14:15:04.636149 5099 
kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:04 crc kubenswrapper[5099]: I0122 14:15:04.636448 5099 scope.go:117] "RemoveContainer" containerID="563099c8fdc6fc3a36ff525a462a2d830742426eacbf17781d1a891dab9018d8" Jan 22 14:15:04 crc kubenswrapper[5099]: E0122 14:15:04.636666 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 14:15:04 crc kubenswrapper[5099]: E0122 14:15:04.641437 5099 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d1321e50bd816\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d1321e50bd816 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:14:24.92408015 +0000 UTC m=+22.631830387,LastTimestamp:2026-01-22 14:15:04.6366288 +0000 UTC m=+62.344379027,Count:8,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:04 crc kubenswrapper[5099]: I0122 14:15:04.686908 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:05 crc kubenswrapper[5099]: I0122 14:15:05.688267 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:06 crc kubenswrapper[5099]: I0122 14:15:06.687545 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:07 crc kubenswrapper[5099]: E0122 14:15:07.361195 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 14:15:07 crc kubenswrapper[5099]: I0122 14:15:07.631493 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:07 crc kubenswrapper[5099]: I0122 14:15:07.632762 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 
22 14:15:07 crc kubenswrapper[5099]: I0122 14:15:07.632829 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:07 crc kubenswrapper[5099]: I0122 14:15:07.632848 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:07 crc kubenswrapper[5099]: I0122 14:15:07.632917 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:15:07 crc kubenswrapper[5099]: E0122 14:15:07.642741 5099 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 14:15:07 crc kubenswrapper[5099]: I0122 14:15:07.688467 5099 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:07 crc kubenswrapper[5099]: I0122 14:15:07.952291 5099 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-d7nlz" Jan 22 14:15:07 crc kubenswrapper[5099]: I0122 14:15:07.959372 5099 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-d7nlz" Jan 22 14:15:08 crc kubenswrapper[5099]: I0122 14:15:08.024744 5099 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 22 14:15:08 crc kubenswrapper[5099]: I0122 14:15:08.056258 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:15:08 crc kubenswrapper[5099]: I0122 14:15:08.056595 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:08 crc kubenswrapper[5099]: I0122 14:15:08.058646 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:08 crc kubenswrapper[5099]: I0122 14:15:08.058704 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:08 crc kubenswrapper[5099]: I0122 14:15:08.058719 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:08 crc kubenswrapper[5099]: E0122 14:15:08.059297 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:08 crc kubenswrapper[5099]: I0122 14:15:08.059564 5099 scope.go:117] "RemoveContainer" containerID="563099c8fdc6fc3a36ff525a462a2d830742426eacbf17781d1a891dab9018d8" Jan 22 14:15:08 crc kubenswrapper[5099]: E0122 14:15:08.059851 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 14:15:08 crc kubenswrapper[5099]: I0122 14:15:08.598580 5099 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 22 14:15:08 crc 
kubenswrapper[5099]: I0122 14:15:08.962140 5099 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-02-21 14:10:07 +0000 UTC" deadline="2026-02-17 13:12:47.18779136 +0000 UTC" Jan 22 14:15:08 crc kubenswrapper[5099]: I0122 14:15:08.962248 5099 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="622h57m38.225558598s" Jan 22 14:15:12 crc kubenswrapper[5099]: E0122 14:15:12.820660 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.643240 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.647033 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.647086 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.647102 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.647272 5099 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.656648 5099 kubelet_node_status.go:127] "Node was previously registered" node="crc" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.656947 5099 kubelet_node_status.go:81] "Successfully registered node" node="crc" Jan 22 14:15:14 crc kubenswrapper[5099]: E0122 14:15:14.656978 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.659998 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.660041 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.660052 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.660071 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.660084 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:14Z","lastTransitionTime":"2026-01-22T14:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:14 crc kubenswrapper[5099]: E0122 14:15:14.673634 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae36316e-5d25-4478-9e3d-172dd4f263b5\\\",\\\"systemUUID\\\":\\\"80005360-1d39-4f7f-b08e-11268098b583\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.681322 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.681370 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.681380 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.681399 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.681409 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:14Z","lastTransitionTime":"2026-01-22T14:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:14 crc kubenswrapper[5099]: E0122 14:15:14.691712 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae36316e-5d25-4478-9e3d-172dd4f263b5\\\",\\\"systemUUID\\\":\\\"80005360-1d39-4f7f-b08e-11268098b583\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.701114 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.701206 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.701221 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.701243 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.701255 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:14Z","lastTransitionTime":"2026-01-22T14:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:14 crc kubenswrapper[5099]: E0122 14:15:14.711949 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae36316e-5d25-4478-9e3d-172dd4f263b5\\\",\\\"systemUUID\\\":\\\"80005360-1d39-4f7f-b08e-11268098b583\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.719840 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.719908 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.719933 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.719957 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:14 crc kubenswrapper[5099]: I0122 14:15:14.719969 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:14Z","lastTransitionTime":"2026-01-22T14:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:14 crc kubenswrapper[5099]: E0122 14:15:14.730420 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae36316e-5d25-4478-9e3d-172dd4f263b5\\\",\\\"systemUUID\\\":\\\"80005360-1d39-4f7f-b08e-11268098b583\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:14 crc kubenswrapper[5099]: E0122 14:15:14.730564 5099 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 22 14:15:14 crc kubenswrapper[5099]: E0122 14:15:14.730594 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:14 crc kubenswrapper[5099]: E0122 14:15:14.831612 5099 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:14 crc kubenswrapper[5099]: E0122 14:15:14.932357 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:15 crc kubenswrapper[5099]: E0122 14:15:15.033513 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:15 crc kubenswrapper[5099]: E0122 14:15:15.134372 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:15 crc kubenswrapper[5099]: E0122 14:15:15.235092 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:15 crc kubenswrapper[5099]: E0122 14:15:15.335259 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:15 crc kubenswrapper[5099]: E0122 14:15:15.435827 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:15 crc kubenswrapper[5099]: E0122 14:15:15.536128 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:15 crc kubenswrapper[5099]: E0122 14:15:15.636967 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:15 crc kubenswrapper[5099]: E0122 14:15:15.737812 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:15 crc kubenswrapper[5099]: E0122 14:15:15.838443 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:15 crc kubenswrapper[5099]: E0122 14:15:15.939018 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:16 crc kubenswrapper[5099]: E0122 14:15:16.039325 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:16 crc kubenswrapper[5099]: E0122 14:15:16.140430 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:16 crc kubenswrapper[5099]: E0122 14:15:16.241217 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:16 crc kubenswrapper[5099]: E0122 14:15:16.342116 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:16 crc kubenswrapper[5099]: E0122 14:15:16.442875 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:16 crc kubenswrapper[5099]: E0122 14:15:16.543913 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:16 crc kubenswrapper[5099]: E0122 14:15:16.644288 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:16 crc kubenswrapper[5099]: E0122 14:15:16.745433 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:16 crc kubenswrapper[5099]: E0122 14:15:16.846116 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:16 crc kubenswrapper[5099]: E0122 
14:15:16.947153 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:17 crc kubenswrapper[5099]: E0122 14:15:17.047513 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:17 crc kubenswrapper[5099]: E0122 14:15:17.148196 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:17 crc kubenswrapper[5099]: E0122 14:15:17.249728 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:17 crc kubenswrapper[5099]: E0122 14:15:17.350778 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:17 crc kubenswrapper[5099]: E0122 14:15:17.451895 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:17 crc kubenswrapper[5099]: E0122 14:15:17.553276 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:17 crc kubenswrapper[5099]: E0122 14:15:17.654131 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:17 crc kubenswrapper[5099]: E0122 14:15:17.754972 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:17 crc kubenswrapper[5099]: E0122 14:15:17.855911 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:17 crc kubenswrapper[5099]: E0122 14:15:17.956849 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:18 crc kubenswrapper[5099]: E0122 14:15:18.057357 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:18 crc kubenswrapper[5099]: E0122 14:15:18.158447 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:18 crc kubenswrapper[5099]: E0122 14:15:18.259424 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:18 crc kubenswrapper[5099]: E0122 14:15:18.360611 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:18 crc kubenswrapper[5099]: E0122 14:15:18.461105 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:18 crc kubenswrapper[5099]: E0122 14:15:18.561317 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:18 crc kubenswrapper[5099]: E0122 14:15:18.662220 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:18 crc kubenswrapper[5099]: E0122 14:15:18.763100 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:18 crc kubenswrapper[5099]: E0122 14:15:18.863220 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:18 crc kubenswrapper[5099]: E0122 14:15:18.964236 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:19 crc 
kubenswrapper[5099]: E0122 14:15:19.065386 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:19 crc kubenswrapper[5099]: E0122 14:15:19.166461 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:19 crc kubenswrapper[5099]: E0122 14:15:19.266814 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:19 crc kubenswrapper[5099]: E0122 14:15:19.367968 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:19 crc kubenswrapper[5099]: E0122 14:15:19.468985 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:19 crc kubenswrapper[5099]: E0122 14:15:19.570044 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:19 crc kubenswrapper[5099]: E0122 14:15:19.671068 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:19 crc kubenswrapper[5099]: E0122 14:15:19.772199 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:19 crc kubenswrapper[5099]: E0122 14:15:19.872949 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:19 crc kubenswrapper[5099]: E0122 14:15:19.973500 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:20 crc kubenswrapper[5099]: E0122 14:15:20.074695 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:20 crc kubenswrapper[5099]: E0122 14:15:20.175668 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:20 crc kubenswrapper[5099]: E0122 14:15:20.275836 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:20 crc kubenswrapper[5099]: E0122 14:15:20.376368 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:20 crc kubenswrapper[5099]: E0122 14:15:20.478129 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:20 crc kubenswrapper[5099]: E0122 14:15:20.578799 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:20 crc kubenswrapper[5099]: E0122 14:15:20.679417 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:20 crc kubenswrapper[5099]: E0122 14:15:20.780140 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:20 crc kubenswrapper[5099]: E0122 14:15:20.880869 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:20 crc kubenswrapper[5099]: E0122 14:15:20.981621 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:21 crc kubenswrapper[5099]: E0122 14:15:21.082372 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 
22 14:15:21 crc kubenswrapper[5099]: E0122 14:15:21.183394 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:21 crc kubenswrapper[5099]: E0122 14:15:21.283764 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:21 crc kubenswrapper[5099]: E0122 14:15:21.384383 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:21 crc kubenswrapper[5099]: E0122 14:15:21.485051 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:21 crc kubenswrapper[5099]: E0122 14:15:21.585976 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:21 crc kubenswrapper[5099]: E0122 14:15:21.686347 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:21 crc kubenswrapper[5099]: I0122 14:15:21.761033 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:21 crc kubenswrapper[5099]: I0122 14:15:21.761961 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:21 crc kubenswrapper[5099]: I0122 14:15:21.762034 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:21 crc kubenswrapper[5099]: I0122 14:15:21.762064 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:21 crc kubenswrapper[5099]: E0122 14:15:21.762698 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:21 crc kubenswrapper[5099]: I0122 14:15:21.762955 5099 scope.go:117] "RemoveContainer" containerID="563099c8fdc6fc3a36ff525a462a2d830742426eacbf17781d1a891dab9018d8" Jan 22 14:15:21 crc kubenswrapper[5099]: E0122 14:15:21.763230 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 14:15:21 crc kubenswrapper[5099]: E0122 14:15:21.787524 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:21 crc kubenswrapper[5099]: E0122 14:15:21.888142 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:21 crc kubenswrapper[5099]: E0122 14:15:21.989257 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:22 crc kubenswrapper[5099]: E0122 14:15:22.090433 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:22 crc kubenswrapper[5099]: E0122 14:15:22.191445 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:22 crc kubenswrapper[5099]: E0122 14:15:22.291563 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node 
\"crc\" not found" Jan 22 14:15:22 crc kubenswrapper[5099]: E0122 14:15:22.392729 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:22 crc kubenswrapper[5099]: E0122 14:15:22.493200 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:22 crc kubenswrapper[5099]: E0122 14:15:22.593384 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:22 crc kubenswrapper[5099]: E0122 14:15:22.693564 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:22 crc kubenswrapper[5099]: E0122 14:15:22.794671 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:22 crc kubenswrapper[5099]: E0122 14:15:22.821615 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 14:15:22 crc kubenswrapper[5099]: E0122 14:15:22.895013 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:22 crc kubenswrapper[5099]: E0122 14:15:22.995547 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:23 crc kubenswrapper[5099]: E0122 14:15:23.096236 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:23 crc kubenswrapper[5099]: E0122 14:15:23.197094 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:23 crc kubenswrapper[5099]: E0122 14:15:23.297952 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:23 crc kubenswrapper[5099]: E0122 14:15:23.399131 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:23 crc kubenswrapper[5099]: E0122 14:15:23.499323 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:23 crc kubenswrapper[5099]: E0122 14:15:23.599484 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:23 crc kubenswrapper[5099]: E0122 14:15:23.699813 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:23 crc kubenswrapper[5099]: E0122 14:15:23.800463 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:23 crc kubenswrapper[5099]: E0122 14:15:23.900682 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:24 crc kubenswrapper[5099]: E0122 14:15:24.001789 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:24 crc kubenswrapper[5099]: E0122 14:15:24.102146 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:24 crc kubenswrapper[5099]: E0122 14:15:24.202417 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:24 crc kubenswrapper[5099]: E0122 14:15:24.302992 5099 kubelet_node_status.go:515] "Error getting 
the current node from lister" err="node \"crc\" not found" Jan 22 14:15:24 crc kubenswrapper[5099]: E0122 14:15:24.403401 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:24 crc kubenswrapper[5099]: E0122 14:15:24.503598 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:24 crc kubenswrapper[5099]: E0122 14:15:24.604408 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:24 crc kubenswrapper[5099]: E0122 14:15:24.704733 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:24 crc kubenswrapper[5099]: E0122 14:15:24.805343 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:24 crc kubenswrapper[5099]: E0122 14:15:24.905994 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:24 crc kubenswrapper[5099]: E0122 14:15:24.980742 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 22 14:15:24 crc kubenswrapper[5099]: I0122 14:15:24.984747 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:24 crc kubenswrapper[5099]: I0122 14:15:24.984797 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:24 crc kubenswrapper[5099]: I0122 14:15:24.984809 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:24 crc kubenswrapper[5099]: I0122 14:15:24.984825 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:24 crc kubenswrapper[5099]: I0122 14:15:24.984838 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:24Z","lastTransitionTime":"2026-01-22T14:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:24 crc kubenswrapper[5099]: E0122 14:15:24.994000 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae36316e-5d25-4478-9e3d-172dd4f263b5\\\",\\\"systemUUID\\\":\\\"80005360-1d39-4f7f-b08e-11268098b583\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.000240 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.000282 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.000297 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.000314 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.000324 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:25Z","lastTransitionTime":"2026-01-22T14:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:25 crc kubenswrapper[5099]: E0122 14:15:25.015923 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae36316e-5d25-4478-9e3d-172dd4f263b5\\\",\\\"systemUUID\\\":\\\"80005360-1d39-4f7f-b08e-11268098b583\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.024881 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.024926 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.024936 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.024950 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.024959 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:25Z","lastTransitionTime":"2026-01-22T14:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:25 crc kubenswrapper[5099]: E0122 14:15:25.040842 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae36316e-5d25-4478-9e3d-172dd4f263b5\\\",\\\"systemUUID\\\":\\\"80005360-1d39-4f7f-b08e-11268098b583\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.048533 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.048612 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.048627 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.048646 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.048681 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:25Z","lastTransitionTime":"2026-01-22T14:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:25 crc kubenswrapper[5099]: E0122 14:15:25.061041 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae36316e-5d25-4478-9e3d-172dd4f263b5\\\",\\\"systemUUID\\\":\\\"80005360-1d39-4f7f-b08e-11268098b583\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:25 crc kubenswrapper[5099]: E0122 14:15:25.061223 5099 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 22 14:15:25 crc kubenswrapper[5099]: E0122 14:15:25.061254 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:25 crc kubenswrapper[5099]: E0122 14:15:25.161881 5099 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:25 crc kubenswrapper[5099]: E0122 14:15:25.263351 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:25 crc kubenswrapper[5099]: E0122 14:15:25.363797 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:25 crc kubenswrapper[5099]: E0122 14:15:25.465101 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:25 crc kubenswrapper[5099]: E0122 14:15:25.565783 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:25 crc kubenswrapper[5099]: E0122 14:15:25.666147 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.760796 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.760806 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.762682 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.762721 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.762731 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.762855 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.762897 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:25 crc kubenswrapper[5099]: I0122 14:15:25.762910 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:25 crc kubenswrapper[5099]: E0122 14:15:25.763114 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:25 crc kubenswrapper[5099]: E0122 14:15:25.763608 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:25 crc kubenswrapper[5099]: E0122 14:15:25.766511 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:25 crc kubenswrapper[5099]: E0122 14:15:25.867452 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:25 crc kubenswrapper[5099]: E0122 14:15:25.967865 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:26 crc kubenswrapper[5099]: E0122 14:15:26.068505 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:26 crc kubenswrapper[5099]: E0122 14:15:26.168653 5099 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Jan 22 14:15:26 crc kubenswrapper[5099]: E0122 14:15:26.268993 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:26 crc kubenswrapper[5099]: E0122 14:15:26.369594 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:26 crc kubenswrapper[5099]: E0122 14:15:26.469741 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:26 crc kubenswrapper[5099]: E0122 14:15:26.570768 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:26 crc kubenswrapper[5099]: E0122 14:15:26.671241 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:26 crc kubenswrapper[5099]: E0122 14:15:26.771796 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:26 crc kubenswrapper[5099]: E0122 14:15:26.872554 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:26 crc kubenswrapper[5099]: E0122 14:15:26.973578 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:27 crc kubenswrapper[5099]: E0122 14:15:27.074034 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:27 crc kubenswrapper[5099]: E0122 14:15:27.174744 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:27 crc kubenswrapper[5099]: E0122 14:15:27.274847 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:27 crc kubenswrapper[5099]: E0122 14:15:27.375755 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:27 crc kubenswrapper[5099]: E0122 14:15:27.476262 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:27 crc kubenswrapper[5099]: E0122 14:15:27.576911 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:27 crc kubenswrapper[5099]: E0122 14:15:27.678006 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:27 crc kubenswrapper[5099]: E0122 14:15:27.779196 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:27 crc kubenswrapper[5099]: E0122 14:15:27.879689 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:27 crc kubenswrapper[5099]: E0122 14:15:27.980268 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:28 crc kubenswrapper[5099]: E0122 14:15:28.080457 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:28 crc kubenswrapper[5099]: E0122 14:15:28.180608 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:28 crc kubenswrapper[5099]: E0122 14:15:28.281192 5099 kubelet_node_status.go:515] "Error getting the current 
node from lister" err="node \"crc\" not found" Jan 22 14:15:28 crc kubenswrapper[5099]: E0122 14:15:28.382091 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:28 crc kubenswrapper[5099]: E0122 14:15:28.482928 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:28 crc kubenswrapper[5099]: E0122 14:15:28.583361 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:28 crc kubenswrapper[5099]: E0122 14:15:28.683863 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:28 crc kubenswrapper[5099]: E0122 14:15:28.785068 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:28 crc kubenswrapper[5099]: E0122 14:15:28.885583 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:28 crc kubenswrapper[5099]: E0122 14:15:28.986756 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:29 crc kubenswrapper[5099]: E0122 14:15:29.087339 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:29 crc kubenswrapper[5099]: E0122 14:15:29.187932 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:29 crc kubenswrapper[5099]: E0122 14:15:29.289067 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:29 crc kubenswrapper[5099]: E0122 14:15:29.389489 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:29 crc kubenswrapper[5099]: E0122 14:15:29.490584 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:29 crc kubenswrapper[5099]: E0122 14:15:29.591610 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:29 crc kubenswrapper[5099]: E0122 14:15:29.692347 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:29 crc kubenswrapper[5099]: E0122 14:15:29.793241 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:29 crc kubenswrapper[5099]: E0122 14:15:29.893528 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:29 crc kubenswrapper[5099]: I0122 14:15:29.920245 5099 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 14:15:29 crc kubenswrapper[5099]: E0122 14:15:29.993671 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:30 crc kubenswrapper[5099]: E0122 14:15:30.094361 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:30 crc kubenswrapper[5099]: E0122 14:15:30.195176 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:30 crc kubenswrapper[5099]: E0122 14:15:30.295953 5099 kubelet_node_status.go:515] 
"Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:30 crc kubenswrapper[5099]: E0122 14:15:30.397157 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:30 crc kubenswrapper[5099]: E0122 14:15:30.497377 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:30 crc kubenswrapper[5099]: E0122 14:15:30.597847 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:30 crc kubenswrapper[5099]: E0122 14:15:30.698584 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:30 crc kubenswrapper[5099]: E0122 14:15:30.798946 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:30 crc kubenswrapper[5099]: E0122 14:15:30.899348 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:31 crc kubenswrapper[5099]: E0122 14:15:30.999953 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:31 crc kubenswrapper[5099]: E0122 14:15:31.100512 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:31 crc kubenswrapper[5099]: E0122 14:15:31.200990 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:31 crc kubenswrapper[5099]: E0122 14:15:31.301130 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:31 crc kubenswrapper[5099]: E0122 14:15:31.402271 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:31 crc kubenswrapper[5099]: E0122 14:15:31.502400 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:31 crc kubenswrapper[5099]: E0122 14:15:31.603332 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:31 crc kubenswrapper[5099]: E0122 14:15:31.704065 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:31 crc kubenswrapper[5099]: E0122 14:15:31.804986 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:31 crc kubenswrapper[5099]: E0122 14:15:31.905401 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:31 crc kubenswrapper[5099]: I0122 14:15:31.987112 5099 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 14:15:32 crc kubenswrapper[5099]: E0122 14:15:32.006060 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:32 crc kubenswrapper[5099]: E0122 14:15:32.106483 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:32 crc kubenswrapper[5099]: E0122 14:15:32.207075 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:32 crc kubenswrapper[5099]: E0122 14:15:32.307527 5099 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:32 crc kubenswrapper[5099]: E0122 14:15:32.407699 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:32 crc kubenswrapper[5099]: E0122 14:15:32.508538 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:32 crc kubenswrapper[5099]: E0122 14:15:32.609224 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:32 crc kubenswrapper[5099]: E0122 14:15:32.709636 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:32 crc kubenswrapper[5099]: I0122 14:15:32.760959 5099 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:32 crc kubenswrapper[5099]: I0122 14:15:32.762616 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:32 crc kubenswrapper[5099]: I0122 14:15:32.762694 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:32 crc kubenswrapper[5099]: I0122 14:15:32.762705 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:32 crc kubenswrapper[5099]: E0122 14:15:32.763340 5099 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:32 crc kubenswrapper[5099]: I0122 14:15:32.763652 5099 scope.go:117] "RemoveContainer" containerID="563099c8fdc6fc3a36ff525a462a2d830742426eacbf17781d1a891dab9018d8" Jan 22 14:15:32 crc kubenswrapper[5099]: E0122 14:15:32.763939 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 14:15:32 crc kubenswrapper[5099]: E0122 14:15:32.810112 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:32 crc kubenswrapper[5099]: E0122 14:15:32.822668 5099 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 14:15:32 crc kubenswrapper[5099]: E0122 14:15:32.910565 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:33 crc kubenswrapper[5099]: E0122 14:15:33.011106 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:33 crc kubenswrapper[5099]: E0122 14:15:33.111541 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:33 crc kubenswrapper[5099]: E0122 14:15:33.212628 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:33 crc kubenswrapper[5099]: E0122 14:15:33.313705 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:33 crc 
kubenswrapper[5099]: E0122 14:15:33.413876 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:33 crc kubenswrapper[5099]: E0122 14:15:33.514244 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:33 crc kubenswrapper[5099]: E0122 14:15:33.614375 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:33 crc kubenswrapper[5099]: E0122 14:15:33.715531 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:33 crc kubenswrapper[5099]: E0122 14:15:33.816642 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:33 crc kubenswrapper[5099]: E0122 14:15:33.916765 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:34 crc kubenswrapper[5099]: E0122 14:15:34.016907 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:34 crc kubenswrapper[5099]: E0122 14:15:34.117520 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:34 crc kubenswrapper[5099]: E0122 14:15:34.218543 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:34 crc kubenswrapper[5099]: E0122 14:15:34.319534 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:34 crc kubenswrapper[5099]: E0122 14:15:34.419988 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:34 crc kubenswrapper[5099]: E0122 14:15:34.520857 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:34 crc kubenswrapper[5099]: E0122 14:15:34.621312 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:34 crc kubenswrapper[5099]: E0122 14:15:34.722111 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:34 crc kubenswrapper[5099]: E0122 14:15:34.823241 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:34 crc kubenswrapper[5099]: E0122 14:15:34.923707 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:35 crc kubenswrapper[5099]: E0122 14:15:35.024036 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:35 crc kubenswrapper[5099]: E0122 14:15:35.124214 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:35 crc kubenswrapper[5099]: E0122 14:15:35.224837 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:35 crc kubenswrapper[5099]: E0122 14:15:35.326051 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:35 crc kubenswrapper[5099]: E0122 14:15:35.384949 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node 
\"crc\" not found" Jan 22 14:15:35 crc kubenswrapper[5099]: I0122 14:15:35.390360 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:35 crc kubenswrapper[5099]: I0122 14:15:35.390408 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:35 crc kubenswrapper[5099]: I0122 14:15:35.390423 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:35 crc kubenswrapper[5099]: I0122 14:15:35.390443 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:35 crc kubenswrapper[5099]: I0122 14:15:35.390457 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:35Z","lastTransitionTime":"2026-01-22T14:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:35 crc kubenswrapper[5099]: E0122 14:15:35.402885 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae36316e-5d25-4478-9e3d-172dd4f263b5\\\",\\\"systemUUID\\\":\\\"80005360-1d39-4f7f-b08e-11268098b583\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:35 crc kubenswrapper[5099]: I0122 14:15:35.407482 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:35 crc kubenswrapper[5099]: I0122 14:15:35.407532 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:35 crc kubenswrapper[5099]: I0122 14:15:35.407547 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:35 crc kubenswrapper[5099]: I0122 14:15:35.407564 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:35 crc kubenswrapper[5099]: I0122 14:15:35.407574 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:35Z","lastTransitionTime":"2026-01-22T14:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:35 crc kubenswrapper[5099]: E0122 14:15:35.419980 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae36316e-5d25-4478-9e3d-172dd4f263b5\\\",\\\"systemUUID\\\":\\\"80005360-1d39-4f7f-b08e-11268098b583\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:35 crc kubenswrapper[5099]: I0122 14:15:35.425100 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:35 crc kubenswrapper[5099]: I0122 14:15:35.425156 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:35 crc kubenswrapper[5099]: I0122 14:15:35.425184 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:35 crc kubenswrapper[5099]: I0122 14:15:35.425205 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:35 crc kubenswrapper[5099]: I0122 14:15:35.425218 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:35Z","lastTransitionTime":"2026-01-22T14:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:35 crc kubenswrapper[5099]: E0122 14:15:35.436985 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae36316e-5d25-4478-9e3d-172dd4f263b5\\\",\\\"systemUUID\\\":\\\"80005360-1d39-4f7f-b08e-11268098b583\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:35 crc kubenswrapper[5099]: I0122 14:15:35.440460 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:35 crc kubenswrapper[5099]: I0122 14:15:35.440490 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:35 crc kubenswrapper[5099]: I0122 14:15:35.440500 5099 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:35 crc kubenswrapper[5099]: I0122 14:15:35.440512 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:35 crc kubenswrapper[5099]: I0122 14:15:35.440522 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:35Z","lastTransitionTime":"2026-01-22T14:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:35 crc kubenswrapper[5099]: E0122 14:15:35.452811 5099 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ae36316e-5d25-4478-9e3d-172dd4f263b5\\\",\\\"systemUUID\\\":\\\"80005360-1d39-4f7f-b08e-11268098b583\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:35 crc kubenswrapper[5099]: E0122 14:15:35.452960 5099 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 22 14:15:35 crc kubenswrapper[5099]: E0122 14:15:35.452988 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:35 crc kubenswrapper[5099]: E0122 14:15:35.553736 5099 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:35 crc kubenswrapper[5099]: E0122 14:15:35.654026 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:35 crc kubenswrapper[5099]: E0122 14:15:35.754144 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:35 crc kubenswrapper[5099]: E0122 14:15:35.854583 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:35 crc kubenswrapper[5099]: E0122 14:15:35.955294 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:36 crc kubenswrapper[5099]: E0122 14:15:36.055882 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:36 crc kubenswrapper[5099]: E0122 14:15:36.156054 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:36 crc kubenswrapper[5099]: E0122 14:15:36.257058 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:36 crc kubenswrapper[5099]: E0122 14:15:36.357460 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:36 crc kubenswrapper[5099]: E0122 14:15:36.458620 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:36 crc kubenswrapper[5099]: E0122 14:15:36.558827 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:36 crc kubenswrapper[5099]: E0122 14:15:36.659975 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:36 crc kubenswrapper[5099]: E0122 14:15:36.760294 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:36 crc kubenswrapper[5099]: E0122 14:15:36.861411 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:36 crc kubenswrapper[5099]: E0122 14:15:36.961867 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:37 crc kubenswrapper[5099]: E0122 14:15:37.062972 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:37 crc kubenswrapper[5099]: E0122 14:15:37.163203 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:37 crc kubenswrapper[5099]: E0122 14:15:37.263804 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:37 crc kubenswrapper[5099]: E0122 14:15:37.364488 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:37 crc kubenswrapper[5099]: E0122 14:15:37.464855 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:37 crc kubenswrapper[5099]: E0122 14:15:37.566017 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:37 crc kubenswrapper[5099]: E0122 
14:15:37.666600 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:37 crc kubenswrapper[5099]: E0122 14:15:37.767488 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:37 crc kubenswrapper[5099]: E0122 14:15:37.868706 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:37 crc kubenswrapper[5099]: E0122 14:15:37.969565 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:38 crc kubenswrapper[5099]: E0122 14:15:38.069934 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:38 crc kubenswrapper[5099]: E0122 14:15:38.170706 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:38 crc kubenswrapper[5099]: E0122 14:15:38.271268 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:38 crc kubenswrapper[5099]: E0122 14:15:38.372196 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:38 crc kubenswrapper[5099]: E0122 14:15:38.473011 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:38 crc kubenswrapper[5099]: E0122 14:15:38.574021 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:38 crc kubenswrapper[5099]: E0122 14:15:38.674490 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:38 crc kubenswrapper[5099]: E0122 14:15:38.774748 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:38 crc kubenswrapper[5099]: E0122 14:15:38.875833 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:38 crc kubenswrapper[5099]: E0122 14:15:38.976786 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.076976 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.177605 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.278623 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.378946 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.479783 5099 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.497890 5099 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.502550 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Jan 22 14:15:39 crc kubenswrapper[5099]: 
I0122 14:15:39.513722 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.521433 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.582581 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.582625 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.582637 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.582656 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.582666 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:39Z","lastTransitionTime":"2026-01-22T14:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.624986 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.685276 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.685380 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.685416 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.685451 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.685479 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:39Z","lastTransitionTime":"2026-01-22T14:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.725957 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.732860 5099 apiserver.go:52] "Watching apiserver" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.742146 5099 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.743021 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc","openshift-multus/multus-additional-cni-plugins-m2v9k","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/machine-config-daemon-88wst","openshift-multus/multus-ddjv2","openshift-multus/network-metrics-daemon-6qncx","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/iptables-alerter-5jnd7","openshift-dns/node-resolver-hthqk","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-node-identity/network-node-identity-dgvkt","openshift-ovn-kubernetes/ovnkube-node-rwglj","openshift-image-registry/node-ca-4tdb8","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jxk5p"] Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.745743 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.746251 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.746448 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.747472 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.747592 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.750177 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.754326 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.755301 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.755671 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.757257 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.759272 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.759798 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.759953 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.759802 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.760075 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.760245 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.760380 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.760392 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.762409 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hthqk" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.765252 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.765266 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.766022 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.769672 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-88wst" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.771262 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.771759 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.771888 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.772112 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.772914 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.776875 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.776973 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qncx" podUID="47a33b1f-9d8a-4a87-9d5b-15c2b36959df" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.781536 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-ddjv2" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.783682 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.783732 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.783973 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.784099 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.784110 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.784888 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.785828 5099 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.787813 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.787866 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.787883 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.787906 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.787924 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:39Z","lastTransitionTime":"2026-01-22T14:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.788546 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.788556 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.788571 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.788717 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.788724 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.789013 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.789639 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.789941 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.789939 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.790276 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.790345 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 
14:15:39.791016 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4tdb8" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.792732 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.793258 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.793494 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.793794 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.795481 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jxk5p" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.796798 5099 scope.go:117] "RemoveContainer" containerID="563099c8fdc6fc3a36ff525a462a2d830742426eacbf17781d1a891dab9018d8" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.798656 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.798872 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.800709 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-hthqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42134f25-23a2-4498-8506-81215398022e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qv2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hthqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.808729 5099 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.812753 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.826942 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.828633 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.839833 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6qncx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a33b1f-9d8a-4a87-9d5b-15c2b36959df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6qncx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.855141 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.866109 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-88wst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4620190f-fea2-4e88-8a94-8e1bd1e1db12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s9qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s9qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-88wst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.877478 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.877524 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.877546 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.877676 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 14:15:39 crc 
kubenswrapper[5099]: I0122 14:15:39.877723 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.877700 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-ddjv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a360a173-34a8-483e-8e75-c23a59b15b83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzznh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ddjv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.877758 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.877980 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878013 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878034 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878058 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878073 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878097 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878092 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878114 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878131 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878150 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878183 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878199 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878217 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878235 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878251 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878268 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878284 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod 
\"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878299 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878318 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878335 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878353 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878370 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878385 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878402 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878417 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878431 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878447 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878464 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878481 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878495 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878513 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878527 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878544 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878559 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878636 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878652 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.878774 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:15:40.378724986 +0000 UTC m=+98.086475213 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878806 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.878994 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.879137 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.879223 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.879412 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.879438 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.879652 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.879665 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.879762 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.879880 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.879912 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.879979 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880013 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880038 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880063 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") 
pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880091 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880121 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880145 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880195 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880221 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880249 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880276 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880302 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880326 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880351 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880375 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880401 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880426 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880453 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881259 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881323 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881349 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881378 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881404 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881431 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881456 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881482 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881540 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881568 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881598 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881625 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881647 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881675 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881700 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881732 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: 
\"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881756 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881778 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881803 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881825 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882688 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882725 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882776 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882797 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882816 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882837 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: 
\"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882857 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882875 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882901 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882928 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882949 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882972 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882990 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.883008 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.883538 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.879783 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.879924 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.879943 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880264 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880382 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880546 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880688 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880887 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.880891 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881267 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881464 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881186 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881513 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881548 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881746 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.881922 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882081 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882115 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882135 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882387 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882612 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882477 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882647 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882708 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882715 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.882862 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.883106 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.883114 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.883251 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.883704 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.883688 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.884931 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.884988 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.885044 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.885049 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.885073 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.885063 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.885219 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.885227 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.885238 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.885281 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.885428 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.885451 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.885816 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.885946 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.885949 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.886699 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.886778 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.886804 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.886826 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.886848 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.886867 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.886895 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.886918 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.886941 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 
14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.886962 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.886984 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887004 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887021 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887037 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887055 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887056 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887114 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887072 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887196 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887219 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887239 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887259 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887279 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887299 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887322 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887353 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887374 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: 
\"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887392 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887409 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887427 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887472 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887497 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887515 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887533 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887549 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887565 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887583 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887601 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887616 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887633 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887652 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887671 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887688 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887705 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887721 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887744 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887761 5099 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887780 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887797 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887817 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887834 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887853 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887870 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887905 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887923 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887942 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887961 5099 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887980 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887998 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888015 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888032 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888049 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888066 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888084 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888102 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888124 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: 
\"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888142 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888200 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888221 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888239 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888257 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888275 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888299 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888323 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888345 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888367 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 22 14:15:39 crc 
kubenswrapper[5099]: I0122 14:15:39.888387 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888409 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888427 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888444 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888461 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888484 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888502 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888521 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888540 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888557 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: 
\"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888576 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.888984 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.889421 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.889476 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.889510 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.889543 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887252 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.890003 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.890044 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.890373 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.890776 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.890976 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.891057 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.891447 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.891543 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.891597 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.891621 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.891626 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.891643 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.891715 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.891748 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.891770 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.891791 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.891798 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.891913 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.892089 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.892139 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.892150 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887419 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887429 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.889273 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.889362 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.889841 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.889886 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.889904 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). 
InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.889913 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.892427 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.892587 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.892747 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.892826 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.892823 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.892987 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.893227 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). 
InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.893260 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.893267 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.893557 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.893654 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.893691 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.887301 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.894031 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.893792 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.894315 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.894193 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.894441 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.894441 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.894680 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.894826 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.894864 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.894884 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.894927 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.895188 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.895207 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.894952 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.895502 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.895733 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.895770 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.895799 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.895825 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.895932 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896007 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896037 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896071 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896091 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896150 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896232 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896275 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896297 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896318 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896365 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896384 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896401 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896440 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") 
pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896460 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896477 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896516 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896538 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896556 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896596 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896618 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896757 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896777 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896866 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896918 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-host-cni-netd\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896943 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.896989 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42134f25-23a2-4498-8506-81215398022e-hosts-file\") pod \"node-resolver-hthqk\" (UID: \"42134f25-23a2-4498-8506-81215398022e\") " pod="openshift-dns/node-resolver-hthqk" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.897010 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4620190f-fea2-4e88-8a94-8e1bd1e1db12-rootfs\") pod \"machine-config-daemon-88wst\" (UID: \"4620190f-fea2-4e88-8a94-8e1bd1e1db12\") " pod="openshift-machine-config-operator/machine-config-daemon-88wst" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.897037 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.897029 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s9qj\" (UniqueName: \"kubernetes.io/projected/4620190f-fea2-4e88-8a94-8e1bd1e1db12-kube-api-access-8s9qj\") pod \"machine-config-daemon-88wst\" (UID: \"4620190f-fea2-4e88-8a94-8e1bd1e1db12\") " pod="openshift-machine-config-operator/machine-config-daemon-88wst" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.897184 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.897295 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.897579 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.897605 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.897613 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.897631 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.897622 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.897641 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:39Z","lastTransitionTime":"2026-01-22T14:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.897695 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.897590 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.897995 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.898017 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.898051 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.898435 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.898983 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.899228 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.899390 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.899474 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.899585 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.899604 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.899740 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.900128 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.900148 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.900194 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.900196 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.900224 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.900223 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.900294 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.900626 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.901030 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.901396 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.901450 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.901558 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.899686 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.901731 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.901766 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). 
InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.901817 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.901849 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.902279 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.902363 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.902664 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxkc6\" (UniqueName: \"kubernetes.io/projected/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-kube-api-access-wxkc6\") pod \"network-metrics-daemon-6qncx\" (UID: \"47a33b1f-9d8a-4a87-9d5b-15c2b36959df\") " pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.903591 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-host-run-netns\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.903771 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-host-run-ovn-kubernetes\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.903799 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t448v\" (UniqueName: \"kubernetes.io/projected/be66d38d-792d-4cff-a545-f5470d04e4b1-kube-api-access-t448v\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.903824 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/35840520-7eec-4a39-8370-7bd619fcf74b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.903850 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dfb21daf-e0ba-4deb-9f8c-45645740ec01-serviceca\") pod \"node-ca-4tdb8\" (UID: \"dfb21daf-e0ba-4deb-9f8c-45645740ec01\") " pod="openshift-image-registry/node-ca-4tdb8" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.903882 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-multus-conf-dir\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.903922 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.903948 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-host-run-k8s-cni-cncf-io\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.903971 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/35840520-7eec-4a39-8370-7bd619fcf74b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.903998 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-host-cni-bin\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.904024 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d146eaed-a126-49d9-9dc2-3cebf5ecd5b9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-jxk5p\" (UID: \"d146eaed-a126-49d9-9dc2-3cebf5ecd5b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jxk5p" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.904050 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.904078 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.902669 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.902838 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.902912 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.903033 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.903312 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.903335 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.902825 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.903586 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.903571 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.904216 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.904374 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.904476 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.904671 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.904699 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.904724 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.905052 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.905377 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.905391 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.905492 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.906623 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.905540 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.904995 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.905844 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.905987 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.906147 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.906420 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.906790 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.906934 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.907152 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-host-slash\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.907190 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.907399 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-run-systemd\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.907442 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-var-lib-openvswitch\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.907461 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-etc-openvswitch\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.907481 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.907503 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-run-openvswitch\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.907522 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-run-ovn\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.907542 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ml64\" (UniqueName: \"kubernetes.io/projected/d146eaed-a126-49d9-9dc2-3cebf5ecd5b9-kube-api-access-2ml64\") pod \"ovnkube-control-plane-57b78d8988-jxk5p\" (UID: \"d146eaed-a126-49d9-9dc2-3cebf5ecd5b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jxk5p" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.907566 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.907587 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.907605 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-host-var-lib-cni-multus\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.907625 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/35840520-7eec-4a39-8370-7bd619fcf74b-os-release\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.907647 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a360a173-34a8-483e-8e75-c23a59b15b83-multus-daemon-config\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.907703 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/35840520-7eec-4a39-8370-7bd619fcf74b-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.907727 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4620190f-fea2-4e88-8a94-8e1bd1e1db12-proxy-tls\") pod \"machine-config-daemon-88wst\" (UID: \"4620190f-fea2-4e88-8a94-8e1bd1e1db12\") " pod="openshift-machine-config-operator/machine-config-daemon-88wst" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.907808 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4620190f-fea2-4e88-8a94-8e1bd1e1db12-mcd-auth-proxy-config\") pod \"machine-config-daemon-88wst\" (UID: \"4620190f-fea2-4e88-8a94-8e1bd1e1db12\") " pod="openshift-machine-config-operator/machine-config-daemon-88wst" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.908073 5099 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.908341 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be66d38d-792d-4cff-a545-f5470d04e4b1-ovnkube-config\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.908396 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/35840520-7eec-4a39-8370-7bd619fcf74b-cnibin\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.908419 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/35840520-7eec-4a39-8370-7bd619fcf74b-cni-binary-copy\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.908450 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dfb21daf-e0ba-4deb-9f8c-45645740ec01-host\") pod \"node-ca-4tdb8\" (UID: \"dfb21daf-e0ba-4deb-9f8c-45645740ec01\") " pod="openshift-image-registry/node-ca-4tdb8" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.908473 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-log-socket\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.908492 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtd2r\" (UniqueName: \"kubernetes.io/projected/35840520-7eec-4a39-8370-7bd619fcf74b-kube-api-access-qtd2r\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.908518 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2str8\" (UniqueName: \"kubernetes.io/projected/dfb21daf-e0ba-4deb-9f8c-45645740ec01-kube-api-access-2str8\") pod \"node-ca-4tdb8\" (UID: \"dfb21daf-e0ba-4deb-9f8c-45645740ec01\") " pod="openshift-image-registry/node-ca-4tdb8" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.908541 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d146eaed-a126-49d9-9dc2-3cebf5ecd5b9-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-jxk5p\" (UID: \"d146eaed-a126-49d9-9dc2-3cebf5ecd5b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jxk5p" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.908559 5099 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-cnibin\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.908585 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.908604 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be66d38d-792d-4cff-a545-f5470d04e4b1-env-overrides\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.908669 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be66d38d-792d-4cff-a545-f5470d04e4b1-ovnkube-script-lib\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.908698 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-multus-cni-dir\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.908722 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a360a173-34a8-483e-8e75-c23a59b15b83-cni-binary-copy\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.908741 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-multus-socket-dir-parent\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.908761 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-host-var-lib-kubelet\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.908990 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-etc-kubernetes\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909037 5099 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909056 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-systemd-units\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909071 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-node-log\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909087 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-host-var-lib-cni-bin\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909108 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909129 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/42134f25-23a2-4498-8506-81215398022e-tmp-dir\") pod \"node-resolver-hthqk\" (UID: \"42134f25-23a2-4498-8506-81215398022e\") " pod="openshift-dns/node-resolver-hthqk" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909149 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qv2v\" (UniqueName: \"kubernetes.io/projected/42134f25-23a2-4498-8506-81215398022e-kube-api-access-7qv2v\") pod \"node-resolver-hthqk\" (UID: \"42134f25-23a2-4498-8506-81215398022e\") " pod="openshift-dns/node-resolver-hthqk" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909186 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-host-kubelet\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909202 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-host-run-netns\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909220 
5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-host-run-multus-certs\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909237 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/35840520-7eec-4a39-8370-7bd619fcf74b-system-cni-dir\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909255 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909275 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909292 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-metrics-certs\") pod \"network-metrics-daemon-6qncx\" (UID: \"47a33b1f-9d8a-4a87-9d5b-15c2b36959df\") " pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909310 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d146eaed-a126-49d9-9dc2-3cebf5ecd5b9-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-jxk5p\" (UID: \"d146eaed-a126-49d9-9dc2-3cebf5ecd5b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jxk5p" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909333 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909348 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-system-cni-dir\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909372 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/be66d38d-792d-4cff-a545-f5470d04e4b1-ovn-node-metrics-cert\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909390 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909407 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-os-release\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909422 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-hostroot\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.909438 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzznh\" (UniqueName: \"kubernetes.io/projected/a360a173-34a8-483e-8e75-c23a59b15b83-kube-api-access-rzznh\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.910153 5099 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.910247 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:40.410227342 +0000 UTC m=+98.117977579 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.910337 5099 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.910424 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:40.410403547 +0000 UTC m=+98.118153784 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.911271 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.911482 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.911655 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.911706 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.912797 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.913016 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.914215 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.914880 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.915605 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.915642 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.915656 5099 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.917961 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.918036 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.918030 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.919807 5099 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.919877 5099 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.919929 5099 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.919949 5099 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.919962 5099 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 
crc kubenswrapper[5099]: I0122 14:15:39.919978 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.919992 5099 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920004 5099 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920016 5099 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920029 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920041 5099 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920055 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920068 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920082 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.920113 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:40.42008763 +0000 UTC m=+98.127837867 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920129 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920143 5099 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920212 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920228 5099 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920239 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920250 5099 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920260 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920270 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920280 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920292 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920301 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920311 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: 
\"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920320 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920329 5099 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920339 5099 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920348 5099 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920358 5099 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920366 5099 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920377 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920386 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920395 5099 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920406 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920415 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920424 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920434 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 
14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920444 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920457 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920471 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920484 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920496 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920506 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920516 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920524 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920533 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920542 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920550 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920561 5099 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920573 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc 
kubenswrapper[5099]: I0122 14:15:39.920582 5099 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920594 5099 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920607 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920644 5099 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920655 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920664 5099 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920673 5099 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920682 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920691 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920699 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920708 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920718 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920726 5099 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920735 5099 reconciler_common.go:299] 
"Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920746 5099 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920754 5099 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920762 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920771 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920780 5099 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920788 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920797 5099 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920806 5099 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920816 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920824 5099 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920833 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920843 5099 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920851 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920861 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920871 5099 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920880 5099 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920890 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920899 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920908 5099 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920917 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920928 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920938 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920946 5099 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920958 5099 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920966 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920975 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920984 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.920993 5099 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921002 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921011 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921020 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921030 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921040 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921049 5099 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921057 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921066 5099 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921074 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921084 5099 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921092 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: 
\"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921101 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921110 5099 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921119 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921128 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921137 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921146 5099 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921156 5099 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921178 5099 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921187 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921197 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921205 5099 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921215 5099 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921224 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" 
(UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921234 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921244 5099 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921253 5099 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921261 5099 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921270 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921279 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921287 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921296 5099 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921305 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921314 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921322 5099 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921330 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921340 5099 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921349 5099 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921357 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921366 5099 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921374 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921383 5099 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921391 5099 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921400 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921408 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921416 5099 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921425 5099 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921434 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921442 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921454 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921465 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921475 5099 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921486 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921496 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921508 5099 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921519 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921529 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921537 5099 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921547 5099 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921556 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921565 5099 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921574 5099 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921583 5099 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921594 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921603 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921611 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921619 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921627 5099 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921635 5099 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921643 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921652 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921661 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921669 5099 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.921677 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.923139 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.924768 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.926103 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.926513 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.927114 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.927139 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.927548 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.927589 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.927761 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.927483 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.928024 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.928094 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). 
InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.928307 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.928327 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.928452 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.928584 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.928845 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.928981 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.929189 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.929319 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.929601 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.930568 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.930992 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.931395 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.931432 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.931783 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.931925 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.930817 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.933145 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.933749 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.933985 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.934045 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.938881 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.939050 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.939211 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.939235 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.939248 5099 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:39 crc kubenswrapper[5099]: E0122 14:15:39.939317 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:40.439296962 +0000 UTC m=+98.147047199 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.941619 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.941830 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). 
InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.942623 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.945859 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.947879 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.948543 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.948831 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.948971 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.949284 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.949399 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.949606 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.949670 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.950052 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.951064 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.949690 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-88wst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4620190f-fea2-4e88-8a94-8e1bd1e1db12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s9qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s9qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-88wst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.955218 5099 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.962847 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-ddjv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a360a173-34a8-483e-8e75-c23a59b15b83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzznh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ddjv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.965197 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.968367 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.971274 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tdb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfb21daf-e0ba-4deb-9f8c-45645740ec01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2str8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tdb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.979644 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jxk5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d146eaed-a126-49d9-9dc2-3cebf5ecd5b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ml64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ml64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-jxk5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:39 crc kubenswrapper[5099]: I0122 14:15:39.980064 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.006543 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.006597 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.006609 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.006631 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.006644 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:40Z","lastTransitionTime":"2026-01-22T14:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.007550 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a13c49f-29c4-4968-b25e-323aa46b5294\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://35a73dbae0e4eaf6c2464e9037ae9074afb792e27c83f6a5cd93693b7d2531fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\
\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://79b20be9073348beaa847d2bbae7c58afb7cc229c4848dc7f609292a8d5f3ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1842e08221440a926087d33f9e5707e2c8f41cc90a17bc232a4a501a592d022a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://077cefa4a49425bbb45c01346f8399426f94060bc6a1a68c577cad91459299c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9132cc0eb41136054564
502e0c129a302572c8d7658acfdcab4362ae222e0e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f88404a827bc7cfe0c884579fd832677e18d8f7599b980b81718d7f623c49879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f88404a827bc7cfe0c884579fd832677e18d8f7599b980b81718d7f623c49879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T14:14:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T14:14:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://909187eb0673067de5d3de63ca81410678dc81af9a1907ae01dfdc426e765e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://909187eb0673067de5d3de63ca81410678dc81af9a1907ae01dfdc426e765e6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T14:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T14:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"ui
d\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://fea88bccab57f098d19c9924811d811f906ba97614fa1a739e94297be1920318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea88bccab57f098d19c9924811d811f906ba97614fa1a739e94297be1920318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T14:14:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T14:14:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:14:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.026082 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.027830 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-multus-conf-dir\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.027885 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-host-run-k8s-cni-cncf-io\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.027907 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/35840520-7eec-4a39-8370-7bd619fcf74b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.027932 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-host-cni-bin\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.027950 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d146eaed-a126-49d9-9dc2-3cebf5ecd5b9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-jxk5p\" (UID: \"d146eaed-a126-49d9-9dc2-3cebf5ecd5b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jxk5p" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.027968 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.027983 5099 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-host-slash\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.028002 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-run-systemd\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.028019 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-var-lib-openvswitch\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.028039 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-etc-openvswitch\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.028059 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.028079 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-run-openvswitch\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.028094 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-run-ovn\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.028109 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2ml64\" (UniqueName: \"kubernetes.io/projected/d146eaed-a126-49d9-9dc2-3cebf5ecd5b9-kube-api-access-2ml64\") pod \"ovnkube-control-plane-57b78d8988-jxk5p\" (UID: \"d146eaed-a126-49d9-9dc2-3cebf5ecd5b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jxk5p" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.028127 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-host-var-lib-cni-multus\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.028146 5099 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/35840520-7eec-4a39-8370-7bd619fcf74b-os-release\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.028179 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a360a173-34a8-483e-8e75-c23a59b15b83-multus-daemon-config\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.028196 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/35840520-7eec-4a39-8370-7bd619fcf74b-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.028272 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.028322 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-run-openvswitch\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.028326 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-var-lib-openvswitch\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.028774 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-host-var-lib-cni-multus\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.029004 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-run-systemd\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.029243 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-host-run-k8s-cni-cncf-io\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.029310 5099 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-multus-conf-dir\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.029353 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.029432 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-host-slash\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.029502 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4620190f-fea2-4e88-8a94-8e1bd1e1db12-proxy-tls\") pod \"machine-config-daemon-88wst\" (UID: \"4620190f-fea2-4e88-8a94-8e1bd1e1db12\") " pod="openshift-machine-config-operator/machine-config-daemon-88wst" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.029613 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4620190f-fea2-4e88-8a94-8e1bd1e1db12-mcd-auth-proxy-config\") pod \"machine-config-daemon-88wst\" (UID: \"4620190f-fea2-4e88-8a94-8e1bd1e1db12\") " pod="openshift-machine-config-operator/machine-config-daemon-88wst" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.029633 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a360a173-34a8-483e-8e75-c23a59b15b83-multus-daemon-config\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.029648 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be66d38d-792d-4cff-a545-f5470d04e4b1-ovnkube-config\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.029702 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/35840520-7eec-4a39-8370-7bd619fcf74b-cnibin\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.029726 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/35840520-7eec-4a39-8370-7bd619fcf74b-cni-binary-copy\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.029775 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" 
(UniqueName: \"kubernetes.io/host-path/dfb21daf-e0ba-4deb-9f8c-45645740ec01-host\") pod \"node-ca-4tdb8\" (UID: \"dfb21daf-e0ba-4deb-9f8c-45645740ec01\") " pod="openshift-image-registry/node-ca-4tdb8" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.029804 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-log-socket\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.029856 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qtd2r\" (UniqueName: \"kubernetes.io/projected/35840520-7eec-4a39-8370-7bd619fcf74b-kube-api-access-qtd2r\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.029889 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2str8\" (UniqueName: \"kubernetes.io/projected/dfb21daf-e0ba-4deb-9f8c-45645740ec01-kube-api-access-2str8\") pod \"node-ca-4tdb8\" (UID: \"dfb21daf-e0ba-4deb-9f8c-45645740ec01\") " pod="openshift-image-registry/node-ca-4tdb8" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.029945 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d146eaed-a126-49d9-9dc2-3cebf5ecd5b9-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-jxk5p\" (UID: \"d146eaed-a126-49d9-9dc2-3cebf5ecd5b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jxk5p" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.029972 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-cnibin\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030026 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be66d38d-792d-4cff-a545-f5470d04e4b1-env-overrides\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030075 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-host-cni-bin\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030119 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be66d38d-792d-4cff-a545-f5470d04e4b1-ovnkube-script-lib\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030149 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-multus-cni-dir\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030191 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a360a173-34a8-483e-8e75-c23a59b15b83-cni-binary-copy\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030217 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-multus-socket-dir-parent\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030241 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-host-var-lib-kubelet\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030262 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-etc-kubernetes\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030289 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-systemd-units\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030314 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-node-log\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030335 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-host-var-lib-cni-bin\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030374 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/42134f25-23a2-4498-8506-81215398022e-tmp-dir\") pod \"node-resolver-hthqk\" (UID: \"42134f25-23a2-4498-8506-81215398022e\") " pod="openshift-dns/node-resolver-hthqk" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030396 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7qv2v\" (UniqueName: \"kubernetes.io/projected/42134f25-23a2-4498-8506-81215398022e-kube-api-access-7qv2v\") pod \"node-resolver-hthqk\" (UID: \"42134f25-23a2-4498-8506-81215398022e\") " 
pod="openshift-dns/node-resolver-hthqk" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030419 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-host-kubelet\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030438 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-host-run-netns\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030463 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-host-run-multus-certs\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030482 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/35840520-7eec-4a39-8370-7bd619fcf74b-system-cni-dir\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030513 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-metrics-certs\") pod \"network-metrics-daemon-6qncx\" (UID: \"47a33b1f-9d8a-4a87-9d5b-15c2b36959df\") " pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030532 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d146eaed-a126-49d9-9dc2-3cebf5ecd5b9-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-jxk5p\" (UID: \"d146eaed-a126-49d9-9dc2-3cebf5ecd5b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jxk5p" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030560 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-system-cni-dir\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030584 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be66d38d-792d-4cff-a545-f5470d04e4b1-ovn-node-metrics-cert\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030604 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-os-release\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 
14:15:40.030621 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-hostroot\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.028344 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-run-ovn\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.030941 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/35840520-7eec-4a39-8370-7bd619fcf74b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.031372 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dfb21daf-e0ba-4deb-9f8c-45645740ec01-host\") pod \"node-ca-4tdb8\" (UID: \"dfb21daf-e0ba-4deb-9f8c-45645740ec01\") " pod="openshift-image-registry/node-ca-4tdb8" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.031501 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/35840520-7eec-4a39-8370-7bd619fcf74b-os-release\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.031593 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-hostroot\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.032319 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rzznh\" (UniqueName: \"kubernetes.io/projected/a360a173-34a8-483e-8e75-c23a59b15b83-kube-api-access-rzznh\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.032455 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.032488 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-host-cni-netd\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.032486 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: 
\"kubernetes.io/configmap/35840520-7eec-4a39-8370-7bd619fcf74b-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.032554 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42134f25-23a2-4498-8506-81215398022e-hosts-file\") pod \"node-resolver-hthqk\" (UID: \"42134f25-23a2-4498-8506-81215398022e\") " pod="openshift-dns/node-resolver-hthqk" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.032581 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4620190f-fea2-4e88-8a94-8e1bd1e1db12-rootfs\") pod \"machine-config-daemon-88wst\" (UID: \"4620190f-fea2-4e88-8a94-8e1bd1e1db12\") " pod="openshift-machine-config-operator/machine-config-daemon-88wst" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.032619 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8s9qj\" (UniqueName: \"kubernetes.io/projected/4620190f-fea2-4e88-8a94-8e1bd1e1db12-kube-api-access-8s9qj\") pod \"machine-config-daemon-88wst\" (UID: \"4620190f-fea2-4e88-8a94-8e1bd1e1db12\") " pod="openshift-machine-config-operator/machine-config-daemon-88wst" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.032636 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wxkc6\" (UniqueName: \"kubernetes.io/projected/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-kube-api-access-wxkc6\") pod \"network-metrics-daemon-6qncx\" (UID: \"47a33b1f-9d8a-4a87-9d5b-15c2b36959df\") " pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.032658 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-host-run-netns\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.032691 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-host-run-ovn-kubernetes\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.032708 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t448v\" (UniqueName: \"kubernetes.io/projected/be66d38d-792d-4cff-a545-f5470d04e4b1-kube-api-access-t448v\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.032729 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/35840520-7eec-4a39-8370-7bd619fcf74b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.032765 5099 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dfb21daf-e0ba-4deb-9f8c-45645740ec01-serviceca\") pod \"node-ca-4tdb8\" (UID: \"dfb21daf-e0ba-4deb-9f8c-45645740ec01\") " pod="openshift-image-registry/node-ca-4tdb8" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.032805 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be66d38d-792d-4cff-a545-f5470d04e4b1-env-overrides\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.032902 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.032940 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-host-run-multus-certs\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: E0122 14:15:40.032996 5099 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 14:15:40 crc kubenswrapper[5099]: E0122 14:15:40.033047 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-metrics-certs podName:47a33b1f-9d8a-4a87-9d5b-15c2b36959df nodeName:}" failed. No retries permitted until 2026-01-22 14:15:40.533032767 +0000 UTC m=+98.240783004 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-metrics-certs") pod "network-metrics-daemon-6qncx" (UID: "47a33b1f-9d8a-4a87-9d5b-15c2b36959df") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.033128 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-host-cni-netd\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.033188 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-multus-socket-dir-parent\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.033224 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-system-cni-dir\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.033480 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-host-kubelet\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.033517 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/42134f25-23a2-4498-8506-81215398022e-tmp-dir\") pod \"node-resolver-hthqk\" (UID: \"42134f25-23a2-4498-8506-81215398022e\") " pod="openshift-dns/node-resolver-hthqk" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.033603 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-host-run-netns\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.033631 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/35840520-7eec-4a39-8370-7bd619fcf74b-system-cni-dir\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.033693 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42134f25-23a2-4498-8506-81215398022e-hosts-file\") pod \"node-resolver-hthqk\" (UID: \"42134f25-23a2-4498-8506-81215398022e\") " pod="openshift-dns/node-resolver-hthqk" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.033922 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/35840520-7eec-4a39-8370-7bd619fcf74b-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.034295 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-os-release\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.034346 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-log-socket\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.034386 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-host-var-lib-cni-bin\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.034415 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-systemd-units\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.034450 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-host-run-ovn-kubernetes\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.034480 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d146eaed-a126-49d9-9dc2-3cebf5ecd5b9-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-jxk5p\" (UID: \"d146eaed-a126-49d9-9dc2-3cebf5ecd5b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jxk5p" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.034506 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-multus-cni-dir\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.034533 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/35840520-7eec-4a39-8370-7bd619fcf74b-cnibin\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.034539 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-host-run-netns\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc 
kubenswrapper[5099]: I0122 14:15:40.034570 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-cnibin\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.034954 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a360a173-34a8-483e-8e75-c23a59b15b83-cni-binary-copy\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.034997 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dfb21daf-e0ba-4deb-9f8c-45645740ec01-serviceca\") pod \"node-ca-4tdb8\" (UID: \"dfb21daf-e0ba-4deb-9f8c-45645740ec01\") " pod="openshift-image-registry/node-ca-4tdb8" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035002 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-etc-kubernetes\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035026 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-node-log\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035066 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be66d38d-792d-4cff-a545-f5470d04e4b1-etc-openvswitch\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035108 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4620190f-fea2-4e88-8a94-8e1bd1e1db12-rootfs\") pod \"machine-config-daemon-88wst\" (UID: \"4620190f-fea2-4e88-8a94-8e1bd1e1db12\") " pod="openshift-machine-config-operator/machine-config-daemon-88wst" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035358 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/35840520-7eec-4a39-8370-7bd619fcf74b-cni-binary-copy\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035537 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d146eaed-a126-49d9-9dc2-3cebf5ecd5b9-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-jxk5p\" (UID: \"d146eaed-a126-49d9-9dc2-3cebf5ecd5b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jxk5p" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035589 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/a360a173-34a8-483e-8e75-c23a59b15b83-host-var-lib-kubelet\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035698 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4620190f-fea2-4e88-8a94-8e1bd1e1db12-mcd-auth-proxy-config\") pod \"machine-config-daemon-88wst\" (UID: \"4620190f-fea2-4e88-8a94-8e1bd1e1db12\") " pod="openshift-machine-config-operator/machine-config-daemon-88wst" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035731 5099 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035751 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035766 5099 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035778 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035799 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035813 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035826 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035837 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035848 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035863 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035876 5099 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035902 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035915 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035929 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035943 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035955 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035967 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035982 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.035995 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036008 5099 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036020 5099 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036031 5099 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036043 5099 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036054 
5099 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036065 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036078 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036090 5099 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036105 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036117 5099 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036130 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036144 5099 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036157 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036186 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036201 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036215 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036228 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") 
on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036240 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036253 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036266 5099 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036280 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036295 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036307 5099 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036318 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036330 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036345 5099 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036359 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036370 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036382 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036393 5099 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" 
Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036406 5099 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036418 5099 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036430 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036441 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.036848 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be66d38d-792d-4cff-a545-f5470d04e4b1-ovnkube-config\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.039010 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be66d38d-792d-4cff-a545-f5470d04e4b1-ovnkube-script-lib\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.042315 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4620190f-fea2-4e88-8a94-8e1bd1e1db12-proxy-tls\") pod \"machine-config-daemon-88wst\" (UID: \"4620190f-fea2-4e88-8a94-8e1bd1e1db12\") " pod="openshift-machine-config-operator/machine-config-daemon-88wst" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.044110 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.044806 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d146eaed-a126-49d9-9dc2-3cebf5ecd5b9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-jxk5p\" (UID: \"d146eaed-a126-49d9-9dc2-3cebf5ecd5b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jxk5p" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.048876 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be66d38d-792d-4cff-a545-f5470d04e4b1-ovn-node-metrics-cert\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.050618 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ml64\" (UniqueName: 
\"kubernetes.io/projected/d146eaed-a126-49d9-9dc2-3cebf5ecd5b9-kube-api-access-2ml64\") pod \"ovnkube-control-plane-57b78d8988-jxk5p\" (UID: \"d146eaed-a126-49d9-9dc2-3cebf5ecd5b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jxk5p" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.053627 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxkc6\" (UniqueName: \"kubernetes.io/projected/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-kube-api-access-wxkc6\") pod \"network-metrics-daemon-6qncx\" (UID: \"47a33b1f-9d8a-4a87-9d5b-15c2b36959df\") " pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.053628 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzznh\" (UniqueName: \"kubernetes.io/projected/a360a173-34a8-483e-8e75-c23a59b15b83-kube-api-access-rzznh\") pod \"multus-ddjv2\" (UID: \"a360a173-34a8-483e-8e75-c23a59b15b83\") " pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.056277 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.056732 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t448v\" (UniqueName: \"kubernetes.io/projected/be66d38d-792d-4cff-a545-f5470d04e4b1-kube-api-access-t448v\") pod \"ovnkube-node-rwglj\" (UID: \"be66d38d-792d-4cff-a545-f5470d04e4b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.058343 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s9qj\" (UniqueName: \"kubernetes.io/projected/4620190f-fea2-4e88-8a94-8e1bd1e1db12-kube-api-access-8s9qj\") pod \"machine-config-daemon-88wst\" (UID: \"4620190f-fea2-4e88-8a94-8e1bd1e1db12\") " pod="openshift-machine-config-operator/machine-config-daemon-88wst" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.059513 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2str8\" (UniqueName: \"kubernetes.io/projected/dfb21daf-e0ba-4deb-9f8c-45645740ec01-kube-api-access-2str8\") pod \"node-ca-4tdb8\" (UID: \"dfb21daf-e0ba-4deb-9f8c-45645740ec01\") " pod="openshift-image-registry/node-ca-4tdb8" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.061606 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qv2v\" (UniqueName: \"kubernetes.io/projected/42134f25-23a2-4498-8506-81215398022e-kube-api-access-7qv2v\") pod \"node-resolver-hthqk\" (UID: \"42134f25-23a2-4498-8506-81215398022e\") " pod="openshift-dns/node-resolver-hthqk" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.068426 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtd2r\" (UniqueName: \"kubernetes.io/projected/35840520-7eec-4a39-8370-7bd619fcf74b-kube-api-access-qtd2r\") pod \"multus-additional-cni-plugins-m2v9k\" (UID: \"35840520-7eec-4a39-8370-7bd619fcf74b\") " pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.072000 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.075910 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be66d38d-792d-4cff-a545-f5470d04e4b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t448v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t448v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda
909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t448v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t448v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t448v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":
\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t448v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t448v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t448v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168
.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t448v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwglj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.079824 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 14:15:40 crc kubenswrapper[5099]: W0122 14:15:40.085520 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34177974_8d82_49d2_a763_391d0df3bbd8.slice/crio-8fc04f5e72ab6251268d2990c40e717c661537178def8e668e5ba4f67b1757c1 WatchSource:0}: Error finding container 8fc04f5e72ab6251268d2990c40e717c661537178def8e668e5ba4f67b1757c1: Status 404 returned error can't find the container with id 8fc04f5e72ab6251268d2990c40e717c661537178def8e668e5ba4f67b1757c1 Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.087924 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9980cda-075f-403b-8cb8-4dee7f3846fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://316c373a4bdd22b1ed9faf03ed0a93cbbdb81ab6e410df5243898da1c7d1be3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1197f96522c23c825f1d22265693ca0f6cdb3422caddc05dc5949c463e83bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1197f96522c23c825f1d22265693ca0f6cdb3422caddc05dc5949c463e83bc10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T14:14:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T14:14:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:14:02Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.089365 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 14:15:40 crc kubenswrapper[5099]: W0122 14:15:40.089533 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-7077a1d13f81e8e8ae1af8fbfc195dc2dac360857de1abe6ca68a715221b3ded WatchSource:0}: Error finding container 7077a1d13f81e8e8ae1af8fbfc195dc2dac360857de1abe6ca68a715221b3ded: Status 404 returned error can't find the container with id 7077a1d13f81e8e8ae1af8fbfc195dc2dac360857de1abe6ca68a715221b3ded Jan 22 14:15:40 crc kubenswrapper[5099]: W0122 14:15:40.100614 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-1b52ebc4a0de6a5db384132c54269226a27b0f6f6e18bf7ae5206ff366ed1bbb WatchSource:0}: Error finding container 1b52ebc4a0de6a5db384132c54269226a27b0f6f6e18bf7ae5206ff366ed1bbb: Status 404 returned error can't find the container with id 1b52ebc4a0de6a5db384132c54269226a27b0f6f6e18bf7ae5206ff366ed1bbb Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.101695 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92d8cd7a-cfa0-40a8-baa8-965843945f8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://95db530cd8291c29532a646c4d1cb47fe229d5c70e62f41fd31bfcae36643391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://acd69d42784e648b84b99c1cced0501b3fc34e0e8d0fa85436cc27014ab88d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6042547e3c27102c016e9cc5bf795c6f38820018f8ebf572bb19c1802c91f35e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6a3a228fdd0a16ad075639d991808c4c1b385622b2989fe2232ea1e51504a96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:14:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.107505 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-hthqk" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.109515 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.109561 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.109574 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.109592 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.109604 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:40Z","lastTransitionTime":"2026-01-22T14:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.112590 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-hthqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42134f25-23a2-4498-8506-81215398022e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qv2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hthqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.114934 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-88wst" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.128555 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-ddjv2" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.130322 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2v9k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35840520-7eec-4a39-8370-7bd619fcf74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtd2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtd2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtd2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-qtd2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtd2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtd2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtd2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2v9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.137077 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:40 crc kubenswrapper[5099]: W0122 14:15:40.145587 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda360a173_34a8_483e_8e75_c23a59b15b83.slice/crio-d795688eea0cc5623212ef1fbf1d9333318bd6da79fecd38d3756ae600f0cf55 WatchSource:0}: Error finding container d795688eea0cc5623212ef1fbf1d9333318bd6da79fecd38d3756ae600f0cf55: Status 404 returned error can't find the container with id d795688eea0cc5623212ef1fbf1d9333318bd6da79fecd38d3756ae600f0cf55 Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.145657 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-m2v9k" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.145366 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a6f8fc-92aa-40d8-b0ac-f8b0034c5e36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://6adc2225720223e57146af518d38a71d01a5a7d3c93cb4c4b55e20bf0cebbff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7376664c839f44962e7ba0e53f1895d1e99866d768aa568d31816e23f3959b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://57d66dcb61882a772119bb25e7d1b4b49ea5f91bdbc8a2daf7f3301b0eeb4d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://563099c8fdc6fc3a36ff525a462a2d830742426eacbf17781d1a891dab9018d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://563099c8fdc6fc3a36ff525a462a2d830742426eacbf17781d1a891dab9018d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T14:14:59Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0122 14:14:58.395453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 14:14:58.395598 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 14:14:58.396541 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1367279653/tls.crt::/tmp/serving-cert-1367279653/tls.key\\\\\\\"\\\\nI0122 14:14:59.119885 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 14:14:59.122136 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 14:14:59.122156 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 14:14:59.122206 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 14:14:59.122213 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 14:14:59.125959 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0122 14:14:59.125976 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 14:14:59.125991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 14:14:59.125997 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 14:14:59.126004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 14:14:59.126008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 14:14:59.126013 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 14:14:59.126017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 14:14:59.128448 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T14:14:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d9b88a4b7534e1edf736c237a20e009b50a64a78928e2fd583b9ba86400b6452\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://516978eba8fe79e87c1d0bcd72dca39f8671f5f446bc1ef52787527b28e698dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516978eba8fe79e87c1d0bcd72dca39f8671f5f446bc1ef52787527b28e698dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T14:14:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T14:14:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:14:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.153405 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4tdb8" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.160398 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jxk5p" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.160577 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.174860 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.180495 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ddjv2" event={"ID":"a360a173-34a8-483e-8e75-c23a59b15b83","Type":"ContainerStarted","Data":"d795688eea0cc5623212ef1fbf1d9333318bd6da79fecd38d3756ae600f0cf55"} Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.184653 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hthqk" event={"ID":"42134f25-23a2-4498-8506-81215398022e","Type":"ContainerStarted","Data":"fa4e40d59803c696306d347df6f8d4f59b02239f6c17b094ea5ab7eaf14cde25"} Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.186130 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6qncx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a33b1f-9d8a-4a87-9d5b-15c2b36959df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6qncx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.188093 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"8fc04f5e72ab6251268d2990c40e717c661537178def8e668e5ba4f67b1757c1"} Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.196772 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d04fcc9-3b89-4fdb-a23e-b91b407f4045\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4af6e790f0b67146252ea7d1dc240b875299d9697f1271a8da4bba5bbdbd3eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://84c4d8685a7f253f25ece1f33240855b9460afdd4def7b3037033ef3bcbf4fa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c8458f5168dc84469986054180e4aa5b1dd5fd8b8fcdc76850d8d82d900f3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://30f23a7da44acb5a6c4e3efee3ea14b7fb95d8d869cfce920d96254e2845d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30f23a7da44acb5a6c4e3efee3ea14b7fb95d8d869cfce920d96254e2845d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T14:14:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T14:14:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:14:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.198465 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.205616 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d24c1ca52fc7eb4263fd15ac005d5a88732f554fa7e633ae037611088c865751"} Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.206072 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.206930 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.207234 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-88wst" event={"ID":"4620190f-fea2-4e88-8a94-8e1bd1e1db12","Type":"ContainerStarted","Data":"4ab4e1187531ee2439913441c4fde04debb3cd368d31961a4df104a58392aef7"} Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.208637 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"1b52ebc4a0de6a5db384132c54269226a27b0f6f6e18bf7ae5206ff366ed1bbb"} Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.210836 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"7077a1d13f81e8e8ae1af8fbfc195dc2dac360857de1abe6ca68a715221b3ded"} Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.211439 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.211485 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.211498 5099 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.211514 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:40 crc kubenswrapper[5099]: W0122 14:15:40.211603 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfb21daf_e0ba_4deb_9f8c_45645740ec01.slice/crio-0a9be884767b3b493144abdf219cecc3d834c5e3429f4ad8271628d78703791c WatchSource:0}: Error finding container 0a9be884767b3b493144abdf219cecc3d834c5e3429f4ad8271628d78703791c: Status 404 returned error can't find the container with id 0a9be884767b3b493144abdf219cecc3d834c5e3429f4ad8271628d78703791c Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.211526 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:40Z","lastTransitionTime":"2026-01-22T14:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.218114 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92d8cd7a-cfa0-40a8-baa8-965843945f8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://95db530cd8291c29532a646c4d1cb47fe229d5c70e62f41fd31bfcae36643391\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extract
ed/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://acd69d42784e648b84b99c1cced0501b3fc34e0e8d0fa85436cc27014ab88d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6042547e3c27102c016e9cc5bf795c6f38820018f8ebf572bb19c1802c91f35e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6a3a228fdd0a16ad075639d991808c4c1b385622b2989fe2232ea1e51504a96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:14:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.228098 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-hthqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42134f25-23a2-4498-8506-81215398022e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qv2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hthqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.244141 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2v9k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35840520-7eec-4a39-8370-7bd619fcf74b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtd2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtd2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtd2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtd2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtd2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtd2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtd2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2v9k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.257513 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a6f8fc-92aa-40d8-b0ac-f8b0034c5e36\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://6adc2225720223e57146af518d38a71d01a5a7d3c93cb4c4b55e20bf0cebbff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7376664c839f44962e7ba0e53f1895d1e99866d768aa568d31816e23f3959b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://57d66dcb61882a772119bb25e7d1b4b49ea5f91bdbc8a2daf7f3301b0eeb4d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d24c1ca52fc7eb4263fd15ac005d5a88732f554fa7e633ae037611088c865751\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://563099c8fdc6fc3a36ff525a462a2d830742426eacbf17781d1a891dab9018d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T14:14:59Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0122 14:14:58.395453 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 14:14:58.395598 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 14:14:58.396541 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1367279653/tls.crt::/tmp/serving-cert-1367279653/tls.key\\\\\\\"\\\\nI0122 14:14:59.119885 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 14:14:59.122136 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 14:14:59.122156 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 14:14:59.122206 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 14:14:59.122213 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 14:14:59.125959 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0122 14:14:59.125976 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0122 14:14:59.125991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 14:14:59.125997 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 14:14:59.126004 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 14:14:59.126008 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 14:14:59.126013 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 14:14:59.126017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0122 14:14:59.128448 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T14:14:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:15:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d9b88a4b7534e1edf736c237a20e009b50a64a78928e2fd583b9ba86400b6452\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://516978eba8fe79e87c1d0bcd72dca39f8671f5f446bc1ef52787527b28e698dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://516978eba8fe79e87c1d0bcd72dca39f8671f5f446bc1ef52787527b28e698dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T14:14:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T14:14:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:14:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.269321 5099 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.279365 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.290310 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6qncx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a33b1f-9d8a-4a87-9d5b-15c2b36959df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6qncx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.316209 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d04fcc9-3b89-4fdb-a23e-b91b407f4045\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4af6e790f0b67146252ea7d1dc240b875299d9697f1271a8da4bba5bbdbd3eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://84c4d8685a7f253f25ece1f33240855b9460afdd4def7b3037033ef3bcbf4fa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c8458f5168dc84469986054180e4aa5b1dd5fd8b8fcdc76850d8d82d900f3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://30f23a7da44acb5a6c4e3efee3ea14b7fb95d8d869cfce920d96254e2845d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30f23a7da44acb5a6c4e3efee3ea14b7fb95d8d869cfce920d96254e2845d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T14:14:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T14:14:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:14:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.318888 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.318932 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.318943 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.318957 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.318966 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:40Z","lastTransitionTime":"2026-01-22T14:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.356457 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.393979 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-88wst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4620190f-fea2-4e88-8a94-8e1bd1e1db12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s9qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s9qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-88wst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.421951 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.422003 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.422015 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.422034 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.422047 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:40Z","lastTransitionTime":"2026-01-22T14:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.435032 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-ddjv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a360a173-34a8-483e-8e75-c23a59b15b83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzznh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ddjv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.439680 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.439780 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.439814 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.439850 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.439880 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:15:40 crc kubenswrapper[5099]: E0122 14:15:40.440009 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:15:40 crc kubenswrapper[5099]: E0122 14:15:40.440033 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:15:40 crc kubenswrapper[5099]: E0122 14:15:40.440046 5099 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:40 crc kubenswrapper[5099]: E0122 14:15:40.440101 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b 
podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:41.440083291 +0000 UTC m=+99.147833528 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:40 crc kubenswrapper[5099]: E0122 14:15:40.440478 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:15:41.440466522 +0000 UTC m=+99.148216769 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:15:40 crc kubenswrapper[5099]: E0122 14:15:40.440528 5099 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:15:40 crc kubenswrapper[5099]: E0122 14:15:40.440560 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:41.440553034 +0000 UTC m=+99.148303271 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:15:40 crc kubenswrapper[5099]: E0122 14:15:40.440621 5099 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:15:40 crc kubenswrapper[5099]: E0122 14:15:40.440649 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:41.440641716 +0000 UTC m=+99.148391953 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:15:40 crc kubenswrapper[5099]: E0122 14:15:40.440701 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:15:40 crc kubenswrapper[5099]: E0122 14:15:40.440712 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:15:40 crc kubenswrapper[5099]: E0122 14:15:40.440720 5099 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:40 crc kubenswrapper[5099]: E0122 14:15:40.440746 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:41.440737569 +0000 UTC m=+99.148487806 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.472804 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tdb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfb21daf-e0ba-4deb-9f8c-45645740ec01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2str8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tdb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.513786 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jxk5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d146eaed-a126-49d9-9dc2-3cebf5ecd5b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ml64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ml64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-jxk5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.524791 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.524840 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.524853 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.524871 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.524882 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:40Z","lastTransitionTime":"2026-01-22T14:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.540377 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-metrics-certs\") pod \"network-metrics-daemon-6qncx\" (UID: \"47a33b1f-9d8a-4a87-9d5b-15c2b36959df\") " pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:15:40 crc kubenswrapper[5099]: E0122 14:15:40.540511 5099 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 14:15:40 crc kubenswrapper[5099]: E0122 14:15:40.540559 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-metrics-certs podName:47a33b1f-9d8a-4a87-9d5b-15c2b36959df nodeName:}" failed. No retries permitted until 2026-01-22 14:15:41.540543659 +0000 UTC m=+99.248293906 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-metrics-certs") pod "network-metrics-daemon-6qncx" (UID: "47a33b1f-9d8a-4a87-9d5b-15c2b36959df") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.563776 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a13c49f-29c4-4968-b25e-323aa46b5294\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:14:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://35a73dbae0e4eaf6c2464e9037ae9074afb792e27c83f6a5cd93693b7d2531fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://79b20be9073348beaa847d2bbae7c58afb7cc229c4848dc7f609292a8d5f3ab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1842e08221440a926087d33f9e5707e2c8f41cc90a17bc232a4a501a592d022a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://077cefa4a49425bbb45c01346f8399426f94060bc6a1a68c577cad91459299c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\
\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9132cc0eb41136054564502e0c129a302572c8d7658acfdcab4362ae222e0e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:14:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f88404a827bc7cfe0c884579fd832677e18d8f7599b980b81718d7f623c49879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f88404a827bc7cfe0c884579fd832677e18d8f7599b980b81718d7f623c49879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T14:14:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T14:14:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://909187eb0673067de5d3de63ca81410678dc81af9a1907ae01dfdc426e765e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://909187eb0673067de5d3de63ca81410678dc81af9a1907ae01dfdc426e765e6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T14:14:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T14:14:05Z\\\"}},\\\"user\\\":{\\
\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://fea88bccab57f098d19c9924811d811f906ba97614fa1a739e94297be1920318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea88bccab57f098d19c9924811d811f906ba97614fa1a739e94297be1920318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T14:14:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T14:14:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:14:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.597565 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.608323 5099 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.627111 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.627158 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.627191 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.627204 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.627213 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:40Z","lastTransitionTime":"2026-01-22T14:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.654976 5099 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.729816 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.729872 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.729885 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.729901 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.729913 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:40Z","lastTransitionTime":"2026-01-22T14:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.765234 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.766303 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.767346 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.769101 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.771375 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.773010 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.774489 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.775933 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.778122 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.779961 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.780943 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.783107 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.784042 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.785926 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Jan 22 14:15:40 
crc kubenswrapper[5099]: I0122 14:15:40.786559 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.787426 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.788890 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.790102 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.791539 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.792688 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.793674 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.795663 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.796875 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.798042 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.799664 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.800740 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.802155 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.803081 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.805470 5099 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.806597 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.807742 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.809271 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.810488 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.811864 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.812923 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.814271 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.816490 5099 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.816662 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.820707 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.821915 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.823529 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.824621 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.825856 5099 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.827143 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.828905 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.829526 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.830569 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.832268 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.832298 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.832306 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.832319 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.832328 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:40Z","lastTransitionTime":"2026-01-22T14:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.832379 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.833243 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.834467 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.835317 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.838281 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.839036 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.840449 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.843957 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.844698 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.845872 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.846664 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.933780 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.933835 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.933849 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.933865 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:40 crc kubenswrapper[5099]: I0122 14:15:40.933876 5099 setters.go:618] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:40Z","lastTransitionTime":"2026-01-22T14:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.039532 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.039899 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.039915 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.039934 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.039946 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:41Z","lastTransitionTime":"2026-01-22T14:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.143683 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.143745 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.143757 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.143773 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.143782 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:41Z","lastTransitionTime":"2026-01-22T14:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.215986 5099 generic.go:358] "Generic (PLEG): container finished" podID="be66d38d-792d-4cff-a545-f5470d04e4b1" containerID="e3e17d0ddd4eec0cafed36151a5232073406435ca20bd2f1670ac73fa8ca5e1b" exitCode=0 Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.216071 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" event={"ID":"be66d38d-792d-4cff-a545-f5470d04e4b1","Type":"ContainerDied","Data":"e3e17d0ddd4eec0cafed36151a5232073406435ca20bd2f1670ac73fa8ca5e1b"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.216141 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" event={"ID":"be66d38d-792d-4cff-a545-f5470d04e4b1","Type":"ContainerStarted","Data":"2e08b10ec0a4ae8b643a82984ae80f92b613c71cfa770425dbdd09103eef039f"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.217916 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hthqk" event={"ID":"42134f25-23a2-4498-8506-81215398022e","Type":"ContainerStarted","Data":"406332a4859c7bb203d18055839ff8fcf9d81939cabffe256c7a6114b4aef576"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.219404 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"56aa6357a08f9fb8cc25208688f5e67ad64afffa48b6a94bc16e383f51568914"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.221042 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4tdb8" event={"ID":"dfb21daf-e0ba-4deb-9f8c-45645740ec01","Type":"ContainerStarted","Data":"c8fea8073c80de210a9f65187739356570065eb7ebd584d34ba9e87a4a144864"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.221081 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4tdb8" event={"ID":"dfb21daf-e0ba-4deb-9f8c-45645740ec01","Type":"ContainerStarted","Data":"0a9be884767b3b493144abdf219cecc3d834c5e3429f4ad8271628d78703791c"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.222773 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-88wst" event={"ID":"4620190f-fea2-4e88-8a94-8e1bd1e1db12","Type":"ContainerStarted","Data":"e2854a79d8d65003d5338aa687bcf769b3d1b6a6adfd16c3107415ffca953334"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.222799 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-88wst" event={"ID":"4620190f-fea2-4e88-8a94-8e1bd1e1db12","Type":"ContainerStarted","Data":"3ac81f6d12ca007b4df78462924c542cd05b380336744c2369659da7b3d6d554"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.224050 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"b62a3a645670ef890a58008a3eb31e71703ccdc590cb58b4d99c6e2362d93c75"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.224081 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"963a89f5e88000797707667c23f0cb035fe08bd44bf23399c2c3ae4103e765d1"} 
Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.225257 5099 generic.go:358] "Generic (PLEG): container finished" podID="35840520-7eec-4a39-8370-7bd619fcf74b" containerID="b8e72249cfbdd8773a331afc8725cd79b10b59df25ff2cf1559037635bbdbf7a" exitCode=0 Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.225315 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m2v9k" event={"ID":"35840520-7eec-4a39-8370-7bd619fcf74b","Type":"ContainerDied","Data":"b8e72249cfbdd8773a331afc8725cd79b10b59df25ff2cf1559037635bbdbf7a"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.225330 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m2v9k" event={"ID":"35840520-7eec-4a39-8370-7bd619fcf74b","Type":"ContainerStarted","Data":"218f69ebc04a6a32bde3bcec9c652f068beda454178d56effee0d5d318c387d0"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.227005 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ddjv2" event={"ID":"a360a173-34a8-483e-8e75-c23a59b15b83","Type":"ContainerStarted","Data":"d04cd448cf54b16d774d57d625e472abe4f1d6ca03fe68a90f5e9155f367e4ce"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.228866 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jxk5p" event={"ID":"d146eaed-a126-49d9-9dc2-3cebf5ecd5b9","Type":"ContainerStarted","Data":"b7ac3ad69bef95eef5eed2cbbb9820cd8977526cb0cd821b1596f63f5f0e6801"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.228903 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jxk5p" event={"ID":"d146eaed-a126-49d9-9dc2-3cebf5ecd5b9","Type":"ContainerStarted","Data":"defccd63638ecb58e4945b421e553f36d55fc4a6db12270f24c608a86a5c37f7"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.228914 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jxk5p" event={"ID":"d146eaed-a126-49d9-9dc2-3cebf5ecd5b9","Type":"ContainerStarted","Data":"ae8a217c1cd5b0131795054d49588fdf360d71835f29ce0a25c3e2dcdcacd23d"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.245682 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=2.245662688 podStartE2EDuration="2.245662688s" podCreationTimestamp="2026-01-22 14:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:15:40.781545374 +0000 UTC m=+98.489295621" watchObservedRunningTime="2026-01-22 14:15:41.245662688 +0000 UTC m=+98.953412935" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.249365 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.249668 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.249679 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.249696 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 
14:15:41.249706 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:41Z","lastTransitionTime":"2026-01-22T14:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.258307 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=2.258290651 podStartE2EDuration="2.258290651s" podCreationTimestamp="2026-01-22 14:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:15:41.257829038 +0000 UTC m=+98.965579295" watchObservedRunningTime="2026-01-22 14:15:41.258290651 +0000 UTC m=+98.966040888" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.314578 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=2.314553739 podStartE2EDuration="2.314553739s" podCreationTimestamp="2026-01-22 14:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:15:41.314535328 +0000 UTC m=+99.022285565" watchObservedRunningTime="2026-01-22 14:15:41.314553739 +0000 UTC m=+99.022303976" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.355579 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.355626 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.355637 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.355656 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.355669 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:41Z","lastTransitionTime":"2026-01-22T14:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.415865 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.415843269 podStartE2EDuration="2.415843269s" podCreationTimestamp="2026-01-22 14:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:15:41.41550856 +0000 UTC m=+99.123258807" watchObservedRunningTime="2026-01-22 14:15:41.415843269 +0000 UTC m=+99.123593506" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.452830 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.452978 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.453014 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.453050 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.453081 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:15:41 crc kubenswrapper[5099]: E0122 14:15:41.453227 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:15:41 crc kubenswrapper[5099]: E0122 14:15:41.453252 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:15:41 crc kubenswrapper[5099]: E0122 14:15:41.453265 5099 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:41 crc kubenswrapper[5099]: E0122 14:15:41.453326 5099 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:15:41 crc kubenswrapper[5099]: E0122 14:15:41.453337 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:43.453317227 +0000 UTC m=+101.161067464 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:41 crc kubenswrapper[5099]: E0122 14:15:41.453510 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:43.453490222 +0000 UTC m=+101.161240459 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:15:41 crc kubenswrapper[5099]: E0122 14:15:41.453523 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:15:43.453515532 +0000 UTC m=+101.161265769 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:15:41 crc kubenswrapper[5099]: E0122 14:15:41.453607 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:15:41 crc kubenswrapper[5099]: E0122 14:15:41.453619 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:15:41 crc kubenswrapper[5099]: E0122 14:15:41.453635 5099 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:41 crc kubenswrapper[5099]: E0122 14:15:41.453674 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:43.453663986 +0000 UTC m=+101.161414273 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:41 crc kubenswrapper[5099]: E0122 14:15:41.453730 5099 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:15:41 crc kubenswrapper[5099]: E0122 14:15:41.453761 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:43.453752858 +0000 UTC m=+101.161503146 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.457643 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.457688 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.457699 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.457713 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.457723 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:41Z","lastTransitionTime":"2026-01-22T14:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.504738 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=2.504720433 podStartE2EDuration="2.504720433s" podCreationTimestamp="2026-01-22 14:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:15:41.504147237 +0000 UTC m=+99.211897474" watchObservedRunningTime="2026-01-22 14:15:41.504720433 +0000 UTC m=+99.212470670" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.533600 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-88wst" podStartSLOduration=78.533569046 podStartE2EDuration="1m18.533569046s" podCreationTimestamp="2026-01-22 14:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:15:41.532698383 +0000 UTC m=+99.240448620" watchObservedRunningTime="2026-01-22 14:15:41.533569046 +0000 UTC m=+99.241319283" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.553717 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-metrics-certs\") pod \"network-metrics-daemon-6qncx\" (UID: \"47a33b1f-9d8a-4a87-9d5b-15c2b36959df\") " pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:15:41 crc kubenswrapper[5099]: E0122 14:15:41.553891 5099 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 14:15:41 crc kubenswrapper[5099]: E0122 14:15:41.554045 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-metrics-certs 
podName:47a33b1f-9d8a-4a87-9d5b-15c2b36959df nodeName:}" failed. No retries permitted until 2026-01-22 14:15:43.554021112 +0000 UTC m=+101.261771409 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-metrics-certs") pod "network-metrics-daemon-6qncx" (UID: "47a33b1f-9d8a-4a87-9d5b-15c2b36959df") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.559882 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.559947 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.559960 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.559982 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.559995 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:41Z","lastTransitionTime":"2026-01-22T14:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.577368 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-ddjv2" podStartSLOduration=78.577346806 podStartE2EDuration="1m18.577346806s" podCreationTimestamp="2026-01-22 14:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:15:41.553505958 +0000 UTC m=+99.261256195" watchObservedRunningTime="2026-01-22 14:15:41.577346806 +0000 UTC m=+99.285097053" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.578104 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-4tdb8" podStartSLOduration=77.578097285 podStartE2EDuration="1m17.578097285s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:15:41.577196001 +0000 UTC m=+99.284946238" watchObservedRunningTime="2026-01-22 14:15:41.578097285 +0000 UTC m=+99.285847522" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.623319 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jxk5p" podStartSLOduration=77.623295553 podStartE2EDuration="1m17.623295553s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:15:41.622590934 +0000 UTC m=+99.330341171" watchObservedRunningTime="2026-01-22 14:15:41.623295553 +0000 UTC m=+99.331045790" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.661712 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
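
A side note on the pod_startup_latency_tracker entries above: in each of them podStartE2EDuration is simply the gap between podCreationTimestamp and watchObservedRunningTime (the zero firstStartedPulling/lastFinishedPulling values mean no image pull was observed for these pods). A minimal Go sketch of that arithmetic, reusing the multus-ddjv2 timestamps copied from the log and dropping the monotonic "m=+..." suffix; this is illustrative only, not the kubelet's own code:

package main

import (
	"fmt"
	"time"
)

// mustParse is a small helper for the sketch; it panics on malformed input.
func mustParse(layout, value string) time.Time {
	t, err := time.Parse(layout, value)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the multus-ddjv2 entry above; the monotonic
	// "m=+..." suffix in the log is not part of the wall-clock value.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created := mustParse(layout, "2026-01-22 14:14:23 +0000 UTC")
	observed := mustParse(layout, "2026-01-22 14:15:41.577346806 +0000 UTC")
	// Prints 1m18.577346806s, matching podStartE2EDuration in the log line.
	fmt.Println(observed.Sub(created))
}
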
Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.661755 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.661767 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.661784 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.661796 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:41Z","lastTransitionTime":"2026-01-22T14:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.760817 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:15:41 crc kubenswrapper[5099]: E0122 14:15:41.761445 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.760927 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:15:41 crc kubenswrapper[5099]: E0122 14:15:41.761535 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.760892 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:15:41 crc kubenswrapper[5099]: E0122 14:15:41.761609 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qncx" podUID="47a33b1f-9d8a-4a87-9d5b-15c2b36959df" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.760929 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
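
The util.go:30 lines just above ("No sandbox for pod can be found. Need to start a new one") pair up with the pod_workers.go:1301 "Error syncing pod, skipping" errors: while the runtime still reports NetworkReady=false, the kubelet will not create new sandboxes for pods that need the cluster network, so network-check-source, network-check-target, network-metrics-daemon and networking-console-plugin stay pending, while the host-network static pods (etcd-crc, kube-apiserver-crc, and so on) were able to start earlier. A simplified Go sketch of that gate, with invented pod and field names; this is not the kubelet's actual pod_workers implementation:

package main

import (
	"errors"
	"fmt"
)

// pod is a hand-rolled stand-in for the sketch, not a Kubernetes API type.
type pod struct {
	name        string
	hostNetwork bool
	hasSandbox  bool
}

var errNetworkNotReady = errors.New("network is not ready: container runtime network not ready: NetworkReady=false")

// syncPod defers creating a new sandbox for cluster-networked pods until the
// CNI plugin reports ready; host-network pods are unaffected by the gate.
func syncPod(p pod, networkReady bool) error {
	if !p.hasSandbox && !p.hostNetwork && !networkReady {
		return fmt.Errorf("error syncing pod %q, skipping: %w", p.name, errNetworkNotReady)
	}
	// ... create the sandbox and start containers here ...
	return nil
}

func main() {
	// A cluster-networked pod is skipped, a host-network pod proceeds.
	fmt.Println(syncPod(pod{name: "openshift-network-diagnostics/network-check-target-fhkjl"}, false))
	fmt.Println(syncPod(pod{name: "openshift-etcd/etcd-crc", hostNetwork: true}, false))
}
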
Jan 22 14:15:41 crc kubenswrapper[5099]: E0122 14:15:41.761676 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.771645 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.771691 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.771712 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.771731 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.771743 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:41Z","lastTransitionTime":"2026-01-22T14:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.795114 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-hthqk" podStartSLOduration=78.795097188 podStartE2EDuration="1m18.795097188s" podCreationTimestamp="2026-01-22 14:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:15:41.738696417 +0000 UTC m=+99.446446674" watchObservedRunningTime="2026-01-22 14:15:41.795097188 +0000 UTC m=+99.502847425" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.873325 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.873370 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.873383 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.873399 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.873413 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:41Z","lastTransitionTime":"2026-01-22T14:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.975963 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.976019 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.976029 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.976047 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:41 crc kubenswrapper[5099]: I0122 14:15:41.976056 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:41Z","lastTransitionTime":"2026-01-22T14:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.078487 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.078869 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.078882 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.078909 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.078920 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:42Z","lastTransitionTime":"2026-01-22T14:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.181396 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.181444 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.181459 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.181480 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.181490 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:42Z","lastTransitionTime":"2026-01-22T14:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.233308 5099 generic.go:358] "Generic (PLEG): container finished" podID="35840520-7eec-4a39-8370-7bd619fcf74b" containerID="0600e8ee425e3eca9a61806ee14e7c289932310cbe477e04058d11691d5fdf7b" exitCode=0 Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.233395 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m2v9k" event={"ID":"35840520-7eec-4a39-8370-7bd619fcf74b","Type":"ContainerDied","Data":"0600e8ee425e3eca9a61806ee14e7c289932310cbe477e04058d11691d5fdf7b"} Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.241042 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" event={"ID":"be66d38d-792d-4cff-a545-f5470d04e4b1","Type":"ContainerStarted","Data":"abe6ab9db13c04524d4ea3832d4bb5eed3457905709638aea5affe8ed78c5589"} Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.241097 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" event={"ID":"be66d38d-792d-4cff-a545-f5470d04e4b1","Type":"ContainerStarted","Data":"22a45bee52bfd1bb56055b24c779d7636d99271112a5e1e51211186b2de38b5c"} Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.241106 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" event={"ID":"be66d38d-792d-4cff-a545-f5470d04e4b1","Type":"ContainerStarted","Data":"2525d47fe46113cbee7f9d171775611c0e609ae59c562e17f518667c9c640796"} Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.241115 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" event={"ID":"be66d38d-792d-4cff-a545-f5470d04e4b1","Type":"ContainerStarted","Data":"86c1beaad455dc8e240bd23040cd301310ee91a1fda3d157a7323b9e5522f4dd"} Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.241123 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" event={"ID":"be66d38d-792d-4cff-a545-f5470d04e4b1","Type":"ContainerStarted","Data":"4407e55caf85d5a24a3105123c4d77dbf866c71f9e4ddb8285b9839772dec257"} Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.241131 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" event={"ID":"be66d38d-792d-4cff-a545-f5470d04e4b1","Type":"ContainerStarted","Data":"10e70ea9bf24b2273f9aa4d29ab4880550d8c7206710251fbeb57ce6c2a92ae5"} Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.284491 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.284540 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.284552 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.284569 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.284581 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:42Z","lastTransitionTime":"2026-01-22T14:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.387025 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.387069 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.387079 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.387093 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.387103 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:42Z","lastTransitionTime":"2026-01-22T14:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.489031 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.489072 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.489081 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.489094 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.489103 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:42Z","lastTransitionTime":"2026-01-22T14:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.591535 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.591576 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.591584 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.591601 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.591613 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:42Z","lastTransitionTime":"2026-01-22T14:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.693606 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.693896 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.693955 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.694021 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.694083 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:42Z","lastTransitionTime":"2026-01-22T14:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.797111 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.797185 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.797203 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.797224 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.797241 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:42Z","lastTransitionTime":"2026-01-22T14:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.899075 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.899123 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.899132 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.899147 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:42 crc kubenswrapper[5099]: I0122 14:15:42.899156 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:42Z","lastTransitionTime":"2026-01-22T14:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.001554 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.001615 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.001630 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.001665 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.001674 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:43Z","lastTransitionTime":"2026-01-22T14:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.104448 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.104489 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.104497 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.104511 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.104520 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:43Z","lastTransitionTime":"2026-01-22T14:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
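
The setters.go:618 payload repeated above is an ordinary Kubernetes NodeCondition serialized as JSON; in this capture it is recomputed and logged roughly every 100 ms and will keep reason KubeletNotReady until a CNI configuration file appears under /etc/kubernetes/cni/net.d/. A stdlib-only Go sketch that parses one of these payloads; the struct here is hand-rolled for illustration, the real type is NodeCondition in k8s.io/api/core/v1:

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// nodeCondition mirrors the JSON keys in the "Node became not ready" entries;
// it is a stand-in for k8s.io/api/core/v1 NodeCondition, for illustration only.
type nodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	// Condition payload copied from one of the setters.go:618 entries above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:43Z","lastTransitionTime":"2026-01-22T14:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s since %s: %s\n", c.Type, c.Status, c.LastTransitionTime.Format(time.RFC3339), c.Reason)
}
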
Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.206508 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.206545 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.206554 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.206567 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.206576 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:43Z","lastTransitionTime":"2026-01-22T14:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.246795 5099 generic.go:358] "Generic (PLEG): container finished" podID="35840520-7eec-4a39-8370-7bd619fcf74b" containerID="51fb3a10012b81eb02656add1b52477a638a335a32c0d1d7f5bbeb242ba8cec7" exitCode=0 Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.246900 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m2v9k" event={"ID":"35840520-7eec-4a39-8370-7bd619fcf74b","Type":"ContainerDied","Data":"51fb3a10012b81eb02656add1b52477a638a335a32c0d1d7f5bbeb242ba8cec7"} Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.248913 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"1be8ea41f2b3d6dfb309ead928210c2a875c085991fcc55a8c2f46953fe8e4f8"} Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.309170 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.309221 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.309231 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.309248 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.309259 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:43Z","lastTransitionTime":"2026-01-22T14:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
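
The nestedpendingoperations.go:348 entries are also worth reading across timestamps: the mount and unmount failures at 14:15:41 were queued with durationBeforeRetry 2s, and when those retries fire at 14:15:43 (below) they fail again and are re-queued with durationBeforeRetry 4s, so the per-volume retry delay roughly doubles on each consecutive failure. A toy Go sketch of that progression; the starting delay matches the log, while the cap is an assumption for the sketch and not taken from the log:

package main

import (
	"fmt"
	"time"
)

// retryBackoff is a toy model of a per-operation retry delay that starts at
// 2s and doubles after every consecutive failure, capped at an assumed limit.
type retryBackoff struct {
	delay time.Duration
	limit time.Duration
}

func (b *retryBackoff) next() time.Duration {
	if b.delay == 0 {
		b.delay = 2 * time.Second
		return b.delay
	}
	b.delay *= 2
	if b.delay > b.limit {
		b.delay = b.limit
	}
	return b.delay
}

func main() {
	b := retryBackoff{limit: 2 * time.Minute} // assumed cap for the sketch
	for i := 0; i < 5; i++ {
		// Prints 2s, 4s, 8s, 16s, 32s: the first two match the
		// durationBeforeRetry values seen at 14:15:41 and 14:15:43.
		fmt.Println("durationBeforeRetry", b.next())
	}
}
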
Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.411655 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.411706 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.411717 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.411731 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.411741 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:43Z","lastTransitionTime":"2026-01-22T14:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.476127 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.476298 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:15:43 crc kubenswrapper[5099]: E0122 14:15:43.476312 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:15:47.476286974 +0000 UTC m=+105.184037221 (durationBeforeRetry 4s).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.476366 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.476398 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:15:43 crc kubenswrapper[5099]: E0122 14:15:43.476405 5099 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.476445 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:15:43 crc kubenswrapper[5099]: E0122 14:15:43.476451 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:47.476442828 +0000 UTC m=+105.184193065 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:15:43 crc kubenswrapper[5099]: E0122 14:15:43.476503 5099 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:15:43 crc kubenswrapper[5099]: E0122 14:15:43.476530 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:47.47652426 +0000 UTC m=+105.184274497 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:15:43 crc kubenswrapper[5099]: E0122 14:15:43.476585 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:15:43 crc kubenswrapper[5099]: E0122 14:15:43.476596 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:15:43 crc kubenswrapper[5099]: E0122 14:15:43.476606 5099 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:43 crc kubenswrapper[5099]: E0122 14:15:43.476629 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:47.476622063 +0000 UTC m=+105.184372300 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:43 crc kubenswrapper[5099]: E0122 14:15:43.476664 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:15:43 crc kubenswrapper[5099]: E0122 14:15:43.476672 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:15:43 crc kubenswrapper[5099]: E0122 14:15:43.476678 5099 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:43 crc kubenswrapper[5099]: E0122 14:15:43.476697 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:47.476691345 +0000 UTC m=+105.184441582 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.513979 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.514033 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.514043 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.514059 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.514071 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:43Z","lastTransitionTime":"2026-01-22T14:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.577392 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-metrics-certs\") pod \"network-metrics-daemon-6qncx\" (UID: \"47a33b1f-9d8a-4a87-9d5b-15c2b36959df\") " pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:15:43 crc kubenswrapper[5099]: E0122 14:15:43.577577 5099 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 14:15:43 crc kubenswrapper[5099]: E0122 14:15:43.577662 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-metrics-certs podName:47a33b1f-9d8a-4a87-9d5b-15c2b36959df nodeName:}" failed. No retries permitted until 2026-01-22 14:15:47.577642256 +0000 UTC m=+105.285392493 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-metrics-certs") pod "network-metrics-daemon-6qncx" (UID: "47a33b1f-9d8a-4a87-9d5b-15c2b36959df") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.617153 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.617222 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.617235 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.617251 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.617262 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:43Z","lastTransitionTime":"2026-01-22T14:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.719146 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.719215 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.719225 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.719240 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.719250 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:43Z","lastTransitionTime":"2026-01-22T14:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.761105 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.761105 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.761131 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:15:43 crc kubenswrapper[5099]: E0122 14:15:43.761277 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 14:15:43 crc kubenswrapper[5099]: E0122 14:15:43.761349 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qncx" podUID="47a33b1f-9d8a-4a87-9d5b-15c2b36959df" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.761358 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:15:43 crc kubenswrapper[5099]: E0122 14:15:43.761459 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 14:15:43 crc kubenswrapper[5099]: E0122 14:15:43.761552 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.821888 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.821948 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.821963 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.821982 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.821994 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:43Z","lastTransitionTime":"2026-01-22T14:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.925145 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.925269 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.925288 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.925310 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:43 crc kubenswrapper[5099]: I0122 14:15:43.925322 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:43Z","lastTransitionTime":"2026-01-22T14:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.027560 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.027604 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.027615 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.027629 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.027639 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:44Z","lastTransitionTime":"2026-01-22T14:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.130133 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.130207 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.130221 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.130240 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.130255 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:44Z","lastTransitionTime":"2026-01-22T14:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.232477 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.232529 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.232541 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.232559 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.232572 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:44Z","lastTransitionTime":"2026-01-22T14:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.253645 5099 generic.go:358] "Generic (PLEG): container finished" podID="35840520-7eec-4a39-8370-7bd619fcf74b" containerID="556e197e5fc9ebcf93b38392d514ba5e87d53d0cf49c4d750953e363735896ad" exitCode=0 Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.253681 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m2v9k" event={"ID":"35840520-7eec-4a39-8370-7bd619fcf74b","Type":"ContainerDied","Data":"556e197e5fc9ebcf93b38392d514ba5e87d53d0cf49c4d750953e363735896ad"} Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.334999 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.335056 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.335065 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.335078 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.335087 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:44Z","lastTransitionTime":"2026-01-22T14:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.436901 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.436985 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.436998 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.437014 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.437025 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:44Z","lastTransitionTime":"2026-01-22T14:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.538528 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.538569 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.538579 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.538593 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.538603 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:44Z","lastTransitionTime":"2026-01-22T14:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.640804 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.640862 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.640875 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.640892 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.640903 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:44Z","lastTransitionTime":"2026-01-22T14:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.742995 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.743044 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.743055 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.743072 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.743083 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:44Z","lastTransitionTime":"2026-01-22T14:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.855877 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.855958 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.855971 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.855990 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.856002 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:44Z","lastTransitionTime":"2026-01-22T14:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.958924 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.959144 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.959220 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.959241 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:44 crc kubenswrapper[5099]: I0122 14:15:44.959253 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:44Z","lastTransitionTime":"2026-01-22T14:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.060822 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.060862 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.060872 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.060887 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.060895 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:45Z","lastTransitionTime":"2026-01-22T14:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.163094 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.163151 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.163183 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.163202 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.163214 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:45Z","lastTransitionTime":"2026-01-22T14:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.260145 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" event={"ID":"be66d38d-792d-4cff-a545-f5470d04e4b1","Type":"ContainerStarted","Data":"4a82c79b9b22dd2a0d73f6dad564abbb62056ada9eb7e92c786e2d3eeab13796"} Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.262974 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m2v9k" event={"ID":"35840520-7eec-4a39-8370-7bd619fcf74b","Type":"ContainerStarted","Data":"d02256764e226316a0782b3a460e4f24c13682699c19b52514d8a4e70430242e"} Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.264367 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.264402 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.264413 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.264428 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.264439 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:45Z","lastTransitionTime":"2026-01-22T14:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.366983 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.367047 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.367066 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.367088 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.367105 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:45Z","lastTransitionTime":"2026-01-22T14:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.469307 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.469372 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.469394 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.469428 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.469452 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:45Z","lastTransitionTime":"2026-01-22T14:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.572672 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.572737 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.572749 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.572767 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.572778 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:45Z","lastTransitionTime":"2026-01-22T14:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.675584 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.675644 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.675656 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.675675 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.675687 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:45Z","lastTransitionTime":"2026-01-22T14:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.760677 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.760720 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.760734 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.760742 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:15:45 crc kubenswrapper[5099]: E0122 14:15:45.760853 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 14:15:45 crc kubenswrapper[5099]: E0122 14:15:45.760936 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 14:15:45 crc kubenswrapper[5099]: E0122 14:15:45.760952 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qncx" podUID="47a33b1f-9d8a-4a87-9d5b-15c2b36959df" Jan 22 14:15:45 crc kubenswrapper[5099]: E0122 14:15:45.761098 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.778042 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.778105 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.778117 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.778140 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.778152 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:45Z","lastTransitionTime":"2026-01-22T14:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.788832 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.788880 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.788893 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.788908 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.788919 5099 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:15:45Z","lastTransitionTime":"2026-01-22T14:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:15:45 crc kubenswrapper[5099]: I0122 14:15:45.833318 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-hbj86"] Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.269855 5099 generic.go:358] "Generic (PLEG): container finished" podID="35840520-7eec-4a39-8370-7bd619fcf74b" containerID="d02256764e226316a0782b3a460e4f24c13682699c19b52514d8a4e70430242e" exitCode=0 Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.601633 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m2v9k" event={"ID":"35840520-7eec-4a39-8370-7bd619fcf74b","Type":"ContainerDied","Data":"d02256764e226316a0782b3a460e4f24c13682699c19b52514d8a4e70430242e"} Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.602254 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-hbj86" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.604764 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.605087 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.605282 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.606643 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.713379 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d31bb8d0-696b-405b-8706-9aae14ddc41c-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-hbj86\" (UID: \"d31bb8d0-696b-405b-8706-9aae14ddc41c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-hbj86" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.713439 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d31bb8d0-696b-405b-8706-9aae14ddc41c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-hbj86\" (UID: \"d31bb8d0-696b-405b-8706-9aae14ddc41c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-hbj86" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.713463 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d31bb8d0-696b-405b-8706-9aae14ddc41c-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-hbj86\" (UID: \"d31bb8d0-696b-405b-8706-9aae14ddc41c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-hbj86" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.713515 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d31bb8d0-696b-405b-8706-9aae14ddc41c-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-hbj86\" (UID: \"d31bb8d0-696b-405b-8706-9aae14ddc41c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-hbj86" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.713566 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d31bb8d0-696b-405b-8706-9aae14ddc41c-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-hbj86\" (UID: \"d31bb8d0-696b-405b-8706-9aae14ddc41c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-hbj86" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.772643 5099 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.779745 5099 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" 
reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.814886 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d31bb8d0-696b-405b-8706-9aae14ddc41c-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-hbj86\" (UID: \"d31bb8d0-696b-405b-8706-9aae14ddc41c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-hbj86" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.814933 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d31bb8d0-696b-405b-8706-9aae14ddc41c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-hbj86\" (UID: \"d31bb8d0-696b-405b-8706-9aae14ddc41c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-hbj86" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.814959 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d31bb8d0-696b-405b-8706-9aae14ddc41c-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-hbj86\" (UID: \"d31bb8d0-696b-405b-8706-9aae14ddc41c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-hbj86" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.814992 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d31bb8d0-696b-405b-8706-9aae14ddc41c-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-hbj86\" (UID: \"d31bb8d0-696b-405b-8706-9aae14ddc41c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-hbj86" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.815028 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d31bb8d0-696b-405b-8706-9aae14ddc41c-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-hbj86\" (UID: \"d31bb8d0-696b-405b-8706-9aae14ddc41c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-hbj86" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.815674 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d31bb8d0-696b-405b-8706-9aae14ddc41c-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-hbj86\" (UID: \"d31bb8d0-696b-405b-8706-9aae14ddc41c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-hbj86" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.815715 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d31bb8d0-696b-405b-8706-9aae14ddc41c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-hbj86\" (UID: \"d31bb8d0-696b-405b-8706-9aae14ddc41c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-hbj86" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.817426 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d31bb8d0-696b-405b-8706-9aae14ddc41c-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-hbj86\" (UID: \"d31bb8d0-696b-405b-8706-9aae14ddc41c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-hbj86" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.822046 5099 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d31bb8d0-696b-405b-8706-9aae14ddc41c-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-hbj86\" (UID: \"d31bb8d0-696b-405b-8706-9aae14ddc41c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-hbj86" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.832765 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d31bb8d0-696b-405b-8706-9aae14ddc41c-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-hbj86\" (UID: \"d31bb8d0-696b-405b-8706-9aae14ddc41c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-hbj86" Jan 22 14:15:46 crc kubenswrapper[5099]: I0122 14:15:46.917338 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-hbj86" Jan 22 14:15:46 crc kubenswrapper[5099]: W0122 14:15:46.929397 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd31bb8d0_696b_405b_8706_9aae14ddc41c.slice/crio-8e20b2205873660095bccc8d7a1d0bcbcc19171dc4bb9a6b40fb718e2badf708 WatchSource:0}: Error finding container 8e20b2205873660095bccc8d7a1d0bcbcc19171dc4bb9a6b40fb718e2badf708: Status 404 returned error can't find the container with id 8e20b2205873660095bccc8d7a1d0bcbcc19171dc4bb9a6b40fb718e2badf708 Jan 22 14:15:47 crc kubenswrapper[5099]: I0122 14:15:47.273418 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-hbj86" event={"ID":"d31bb8d0-696b-405b-8706-9aae14ddc41c","Type":"ContainerStarted","Data":"8e20b2205873660095bccc8d7a1d0bcbcc19171dc4bb9a6b40fb718e2badf708"} Jan 22 14:15:47 crc kubenswrapper[5099]: I0122 14:15:47.277018 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m2v9k" event={"ID":"35840520-7eec-4a39-8370-7bd619fcf74b","Type":"ContainerStarted","Data":"583f99ba2b43389c97cdcc6e00c88ad7789971ad3159182b869ced38253eb4e6"} Jan 22 14:15:47 crc kubenswrapper[5099]: I0122 14:15:47.282105 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" event={"ID":"be66d38d-792d-4cff-a545-f5470d04e4b1","Type":"ContainerStarted","Data":"4377efde777de951a8ff26e72d0cd663e471827eb1a4bd1015812189b244eb58"} Jan 22 14:15:47 crc kubenswrapper[5099]: I0122 14:15:47.282512 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:47 crc kubenswrapper[5099]: I0122 14:15:47.282606 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:47 crc kubenswrapper[5099]: I0122 14:15:47.282627 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:47 crc kubenswrapper[5099]: I0122 14:15:47.305055 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" podStartSLOduration=84.305035799 podStartE2EDuration="1m24.305035799s" podCreationTimestamp="2026-01-22 14:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:15:47.304566977 
+0000 UTC m=+105.012317264" watchObservedRunningTime="2026-01-22 14:15:47.305035799 +0000 UTC m=+105.012786036" Jan 22 14:15:47 crc kubenswrapper[5099]: I0122 14:15:47.415377 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:47 crc kubenswrapper[5099]: I0122 14:15:47.419138 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:15:47 crc kubenswrapper[5099]: I0122 14:15:47.522149 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:15:47 crc kubenswrapper[5099]: I0122 14:15:47.522306 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:15:47 crc kubenswrapper[5099]: E0122 14:15:47.522407 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:15:55.522373132 +0000 UTC m=+113.230123389 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:15:47 crc kubenswrapper[5099]: I0122 14:15:47.522339 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:15:47 crc kubenswrapper[5099]: E0122 14:15:47.522425 5099 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:15:47 crc kubenswrapper[5099]: E0122 14:15:47.522569 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:55.522547027 +0000 UTC m=+113.230297334 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:15:47 crc kubenswrapper[5099]: E0122 14:15:47.522604 5099 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:15:47 crc kubenswrapper[5099]: E0122 14:15:47.522707 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:55.522685801 +0000 UTC m=+113.230436038 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:15:47 crc kubenswrapper[5099]: E0122 14:15:47.522740 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:15:47 crc kubenswrapper[5099]: E0122 14:15:47.522794 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:15:47 crc kubenswrapper[5099]: E0122 14:15:47.522808 5099 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:47 crc kubenswrapper[5099]: E0122 14:15:47.522933 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:55.522918737 +0000 UTC m=+113.230668984 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:47 crc kubenswrapper[5099]: I0122 14:15:47.522967 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:15:47 crc kubenswrapper[5099]: I0122 14:15:47.523010 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:15:47 crc kubenswrapper[5099]: E0122 14:15:47.523119 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:15:47 crc kubenswrapper[5099]: E0122 14:15:47.523178 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:15:47 crc kubenswrapper[5099]: E0122 14:15:47.523190 5099 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:47 crc kubenswrapper[5099]: E0122 14:15:47.523231 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:55.523219395 +0000 UTC m=+113.230969712 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:47 crc kubenswrapper[5099]: I0122 14:15:47.623628 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-metrics-certs\") pod \"network-metrics-daemon-6qncx\" (UID: \"47a33b1f-9d8a-4a87-9d5b-15c2b36959df\") " pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:15:47 crc kubenswrapper[5099]: E0122 14:15:47.623798 5099 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 14:15:47 crc kubenswrapper[5099]: E0122 14:15:47.623885 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-metrics-certs podName:47a33b1f-9d8a-4a87-9d5b-15c2b36959df nodeName:}" failed. No retries permitted until 2026-01-22 14:15:55.623867118 +0000 UTC m=+113.331617365 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-metrics-certs") pod "network-metrics-daemon-6qncx" (UID: "47a33b1f-9d8a-4a87-9d5b-15c2b36959df") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 14:15:47 crc kubenswrapper[5099]: I0122 14:15:47.760909 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:15:47 crc kubenswrapper[5099]: I0122 14:15:47.760944 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:15:47 crc kubenswrapper[5099]: E0122 14:15:47.761031 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 14:15:47 crc kubenswrapper[5099]: I0122 14:15:47.761048 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:15:47 crc kubenswrapper[5099]: E0122 14:15:47.761130 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 14:15:47 crc kubenswrapper[5099]: I0122 14:15:47.761198 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:15:47 crc kubenswrapper[5099]: E0122 14:15:47.761298 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 14:15:47 crc kubenswrapper[5099]: E0122 14:15:47.761347 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qncx" podUID="47a33b1f-9d8a-4a87-9d5b-15c2b36959df" Jan 22 14:15:48 crc kubenswrapper[5099]: I0122 14:15:48.289213 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-hbj86" event={"ID":"d31bb8d0-696b-405b-8706-9aae14ddc41c","Type":"ContainerStarted","Data":"b3fc5f8ed36cd50b765260c36c7ad7b58a489ffda4a22142230922af8f8e4fad"} Jan 22 14:15:48 crc kubenswrapper[5099]: I0122 14:15:48.292660 5099 generic.go:358] "Generic (PLEG): container finished" podID="35840520-7eec-4a39-8370-7bd619fcf74b" containerID="583f99ba2b43389c97cdcc6e00c88ad7789971ad3159182b869ced38253eb4e6" exitCode=0 Jan 22 14:15:48 crc kubenswrapper[5099]: I0122 14:15:48.293036 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m2v9k" event={"ID":"35840520-7eec-4a39-8370-7bd619fcf74b","Type":"ContainerDied","Data":"583f99ba2b43389c97cdcc6e00c88ad7789971ad3159182b869ced38253eb4e6"} Jan 22 14:15:48 crc kubenswrapper[5099]: I0122 14:15:48.303219 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-hbj86" podStartSLOduration=85.303201276 podStartE2EDuration="1m25.303201276s" podCreationTimestamp="2026-01-22 14:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:15:48.303106164 +0000 UTC m=+106.010856421" watchObservedRunningTime="2026-01-22 14:15:48.303201276 +0000 UTC m=+106.010951503" Jan 22 14:15:49 crc kubenswrapper[5099]: I0122 14:15:49.302919 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m2v9k" event={"ID":"35840520-7eec-4a39-8370-7bd619fcf74b","Type":"ContainerStarted","Data":"5840fdcbe514c15c766d2e6942367b16b2d4518f44ab8cd1f1fdf13613b37f8f"} Jan 22 14:15:49 crc kubenswrapper[5099]: I0122 14:15:49.358962 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-m2v9k" podStartSLOduration=86.358924746 podStartE2EDuration="1m26.358924746s" podCreationTimestamp="2026-01-22 14:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:15:49.329092766 +0000 UTC m=+107.036843003" watchObservedRunningTime="2026-01-22 14:15:49.358924746 +0000 UTC m=+107.066674983" Jan 22 14:15:49 crc kubenswrapper[5099]: I0122 14:15:49.360157 5099 kubelet.go:2544] "SyncLoop 
UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6qncx"] Jan 22 14:15:49 crc kubenswrapper[5099]: I0122 14:15:49.360358 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:15:49 crc kubenswrapper[5099]: E0122 14:15:49.360555 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qncx" podUID="47a33b1f-9d8a-4a87-9d5b-15c2b36959df" Jan 22 14:15:49 crc kubenswrapper[5099]: I0122 14:15:49.760741 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:15:49 crc kubenswrapper[5099]: I0122 14:15:49.760774 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:15:49 crc kubenswrapper[5099]: E0122 14:15:49.760893 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 14:15:49 crc kubenswrapper[5099]: E0122 14:15:49.761023 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 14:15:49 crc kubenswrapper[5099]: I0122 14:15:49.761286 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:15:49 crc kubenswrapper[5099]: E0122 14:15:49.761362 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 14:15:50 crc kubenswrapper[5099]: I0122 14:15:50.760364 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:15:50 crc kubenswrapper[5099]: E0122 14:15:50.760522 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6qncx" podUID="47a33b1f-9d8a-4a87-9d5b-15c2b36959df" Jan 22 14:15:51 crc kubenswrapper[5099]: I0122 14:15:51.239375 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:15:51 crc kubenswrapper[5099]: I0122 14:15:51.760650 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:15:51 crc kubenswrapper[5099]: I0122 14:15:51.760706 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:15:51 crc kubenswrapper[5099]: I0122 14:15:51.760643 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:15:51 crc kubenswrapper[5099]: E0122 14:15:51.760794 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 14:15:51 crc kubenswrapper[5099]: E0122 14:15:51.760901 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 14:15:51 crc kubenswrapper[5099]: E0122 14:15:51.761051 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 14:15:52 crc kubenswrapper[5099]: I0122 14:15:52.763120 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:15:52 crc kubenswrapper[5099]: E0122 14:15:52.763284 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qncx" podUID="47a33b1f-9d8a-4a87-9d5b-15c2b36959df" Jan 22 14:15:53 crc kubenswrapper[5099]: I0122 14:15:53.760224 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:15:53 crc kubenswrapper[5099]: E0122 14:15:53.760710 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 14:15:53 crc kubenswrapper[5099]: I0122 14:15:53.760394 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:15:53 crc kubenswrapper[5099]: E0122 14:15:53.761005 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 14:15:53 crc kubenswrapper[5099]: I0122 14:15:53.760412 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:15:53 crc kubenswrapper[5099]: E0122 14:15:53.761230 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 14:15:54 crc kubenswrapper[5099]: I0122 14:15:54.761425 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:15:54 crc kubenswrapper[5099]: E0122 14:15:54.761700 5099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qncx" podUID="47a33b1f-9d8a-4a87-9d5b-15c2b36959df" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.304420 5099 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.304600 5099 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.346457 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-ldzlj"] Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.425080 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-q6pjq"] Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.428520 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-2pf7j"] Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.429327 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-q6pjq" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.433324 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d"] Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.433735 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-ldzlj" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.434966 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.435232 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.436595 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.436951 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.437345 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-g87pb"] Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.437464 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-2pf7j" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.437543 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.437774 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.437832 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.437341 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.438600 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.442224 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl"] Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.442404 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.442414 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.442730 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.443338 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.443597 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 22 14:15:55 crc 
kubenswrapper[5099]: I0122 14:15:55.443619 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.444297 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.444642 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rf4fl"] Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.445240 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-g87pb" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.446801 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.447104 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.447334 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.447117 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.449653 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.453191 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.453539 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.453718 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.453884 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.509046 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.555415 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.555549 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c829ef61-cc95-4adc-88a6-433ed349112d-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-q6pjq\" (UID: 
\"c829ef61-cc95-4adc-88a6-433ed349112d\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q6pjq" Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.555635 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:11.555584447 +0000 UTC m=+129.263334684 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.555712 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5457f8-56c6-47bc-86eb-87dfc7cd63c9-serving-cert\") pod \"authentication-operator-7f5c659b84-84v6d\" (UID: \"ae5457f8-56c6-47bc-86eb-87dfc7cd63c9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.555793 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c829ef61-cc95-4adc-88a6-433ed349112d-config\") pod \"machine-api-operator-755bb95488-q6pjq\" (UID: \"c829ef61-cc95-4adc-88a6-433ed349112d\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q6pjq" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.555857 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65c18850-3f68-4603-955b-12a4ab882766-serving-cert\") pod \"openshift-config-operator-5777786469-2pf7j\" (UID: \"65c18850-3f68-4603-955b-12a4ab882766\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2pf7j" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.555932 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8408c5c-b382-4045-96dd-4204e71d0798-config\") pod \"kube-controller-manager-operator-69d5f845f8-flxrl\" (UID: \"b8408c5c-b382-4045-96dd-4204e71d0798\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.555989 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.556044 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: 
\"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.556085 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5457f8-56c6-47bc-86eb-87dfc7cd63c9-config\") pod \"authentication-operator-7f5c659b84-84v6d\" (UID: \"ae5457f8-56c6-47bc-86eb-87dfc7cd63c9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.556117 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c451b1af-aa09-4621-b237-72a053e5e347-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-g87pb\" (UID: \"c451b1af-aa09-4621-b237-72a053e5e347\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-g87pb" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.556140 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f2j9\" (UniqueName: \"kubernetes.io/projected/65c18850-3f68-4603-955b-12a4ab882766-kube-api-access-9f2j9\") pod \"openshift-config-operator-5777786469-2pf7j\" (UID: \"65c18850-3f68-4603-955b-12a4ab882766\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2pf7j" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.556182 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae5457f8-56c6-47bc-86eb-87dfc7cd63c9-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-84v6d\" (UID: \"ae5457f8-56c6-47bc-86eb-87dfc7cd63c9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.556209 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae5457f8-56c6-47bc-86eb-87dfc7cd63c9-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-84v6d\" (UID: \"ae5457f8-56c6-47bc-86eb-87dfc7cd63c9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.556232 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnlrm\" (UniqueName: \"kubernetes.io/projected/c451b1af-aa09-4621-b237-72a053e5e347-kube-api-access-pnlrm\") pod \"ingress-operator-6b9cb4dbcf-g87pb\" (UID: \"c451b1af-aa09-4621-b237-72a053e5e347\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-g87pb" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.556261 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c451b1af-aa09-4621-b237-72a053e5e347-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-g87pb\" (UID: \"c451b1af-aa09-4621-b237-72a053e5e347\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-g87pb" Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.556278 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.556311 5099 
projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.556321 5099 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.556281 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwrlq\" (UniqueName: \"kubernetes.io/projected/c829ef61-cc95-4adc-88a6-433ed349112d-kube-api-access-gwrlq\") pod \"machine-api-operator-755bb95488-q6pjq\" (UID: \"c829ef61-cc95-4adc-88a6-433ed349112d\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q6pjq" Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.556396 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 14:16:11.556385439 +0000 UTC m=+129.264135676 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.556426 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c829ef61-cc95-4adc-88a6-433ed349112d-images\") pod \"machine-api-operator-755bb95488-q6pjq\" (UID: \"c829ef61-cc95-4adc-88a6-433ed349112d\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q6pjq" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.556465 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c451b1af-aa09-4621-b237-72a053e5e347-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-g87pb\" (UID: \"c451b1af-aa09-4621-b237-72a053e5e347\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-g87pb" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.556485 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b8408c5c-b382-4045-96dd-4204e71d0798-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-flxrl\" (UID: \"b8408c5c-b382-4045-96dd-4204e71d0798\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.556509 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.556547 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8408c5c-b382-4045-96dd-4204e71d0798-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-flxrl\" (UID: \"b8408c5c-b382-4045-96dd-4204e71d0798\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.556565 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8408c5c-b382-4045-96dd-4204e71d0798-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-flxrl\" (UID: \"b8408c5c-b382-4045-96dd-4204e71d0798\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.556593 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42xhx\" (UniqueName: \"kubernetes.io/projected/ae5457f8-56c6-47bc-86eb-87dfc7cd63c9-kube-api-access-42xhx\") pod \"authentication-operator-7f5c659b84-84v6d\" (UID: \"ae5457f8-56c6-47bc-86eb-87dfc7cd63c9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.556674 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.556692 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/65c18850-3f68-4603-955b-12a4ab882766-available-featuregates\") pod \"openshift-config-operator-5777786469-2pf7j\" (UID: \"65c18850-3f68-4603-955b-12a4ab882766\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2pf7j" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.556728 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v7lg\" (UniqueName: \"kubernetes.io/projected/513664b2-32c9-4290-9ae7-2400a1c4da84-kube-api-access-2v7lg\") pod \"downloads-747b44746d-ldzlj\" (UID: \"513664b2-32c9-4290-9ae7-2400a1c4da84\") " pod="openshift-console/downloads-747b44746d-ldzlj" Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.556751 5099 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.556823 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:16:11.55680502 +0000 UTC m=+129.264555257 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.556914 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.557003 5099 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.557053 5099 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.557399 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 14:16:11.557388645 +0000 UTC m=+129.265138882 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.557479 5099 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.557601 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:16:11.557587531 +0000 UTC m=+129.265337828 (durationBeforeRetry 16s). 
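The MountVolume.SetUp failures for kube-api-access-l7w75 and kube-api-access-gwt8b above concern the auto-generated projected service-account volume: it bundles the bound token, the kube-root-ca.crt ConfigMap, the pod's namespace and, per these errors, the openshift-service-ca.crt ConfigMap, so the mount cannot be prepared until the kubelet's per-pod object caches have registered those ConfigMaps. Below is a sketch of the typical shape of such a volume written with the k8s.io/api/core/v1 types; the item keys and token lifetime are assumptions for illustration, not values taken from this node:

    // Hypothetical sketch: the usual shape of an auto-generated
    // kube-api-access-* projected volume. The SetUp in the log cannot
    // proceed until the ConfigMaps referenced here are in the kubelet's cache.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	expiry := int64(3607) // commonly used bound-token lifetime (assumption)

    	vol := corev1.Volume{
    		Name: "kube-api-access-gwt8b", // name taken from the log entry above
    		VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{
    					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
    						Path:              "token",
    						ExpirationSeconds: &expiry,
    					}},
    					{ConfigMap: &corev1.ConfigMapProjection{
    						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
    						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
    					}},
    					{ConfigMap: &corev1.ConfigMapProjection{
    						// second ConfigMap implied by the error text; key name assumed
    						LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
    						Items:                []corev1.KeyToPath{{Key: "service-ca.crt", Path: "service-ca.crt"}},
    					}},
    					{DownwardAPI: &corev1.DownwardAPIProjection{
    						Items: []corev1.DownwardAPIVolumeFile{{
    							Path:     "namespace",
    							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
    						}},
    					}},
    				},
    			},
    		},
    	}
    	fmt.Printf("%+v\n", vol)
    }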
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.657688 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c451b1af-aa09-4621-b237-72a053e5e347-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-g87pb\" (UID: \"c451b1af-aa09-4621-b237-72a053e5e347\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-g87pb" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.657756 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9f2j9\" (UniqueName: \"kubernetes.io/projected/65c18850-3f68-4603-955b-12a4ab882766-kube-api-access-9f2j9\") pod \"openshift-config-operator-5777786469-2pf7j\" (UID: \"65c18850-3f68-4603-955b-12a4ab882766\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2pf7j" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.657792 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae5457f8-56c6-47bc-86eb-87dfc7cd63c9-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-84v6d\" (UID: \"ae5457f8-56c6-47bc-86eb-87dfc7cd63c9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.657829 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae5457f8-56c6-47bc-86eb-87dfc7cd63c9-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-84v6d\" (UID: \"ae5457f8-56c6-47bc-86eb-87dfc7cd63c9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.657861 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pnlrm\" (UniqueName: \"kubernetes.io/projected/c451b1af-aa09-4621-b237-72a053e5e347-kube-api-access-pnlrm\") pod \"ingress-operator-6b9cb4dbcf-g87pb\" (UID: \"c451b1af-aa09-4621-b237-72a053e5e347\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-g87pb" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.657897 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c451b1af-aa09-4621-b237-72a053e5e347-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-g87pb\" (UID: \"c451b1af-aa09-4621-b237-72a053e5e347\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-g87pb" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.659230 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae5457f8-56c6-47bc-86eb-87dfc7cd63c9-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-84v6d\" (UID: \"ae5457f8-56c6-47bc-86eb-87dfc7cd63c9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.659339 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ae5457f8-56c6-47bc-86eb-87dfc7cd63c9-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-84v6d\" (UID: \"ae5457f8-56c6-47bc-86eb-87dfc7cd63c9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.659390 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwrlq\" (UniqueName: \"kubernetes.io/projected/c829ef61-cc95-4adc-88a6-433ed349112d-kube-api-access-gwrlq\") pod \"machine-api-operator-755bb95488-q6pjq\" (UID: \"c829ef61-cc95-4adc-88a6-433ed349112d\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q6pjq" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.659614 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c829ef61-cc95-4adc-88a6-433ed349112d-images\") pod \"machine-api-operator-755bb95488-q6pjq\" (UID: \"c829ef61-cc95-4adc-88a6-433ed349112d\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q6pjq" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.659632 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c451b1af-aa09-4621-b237-72a053e5e347-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-g87pb\" (UID: \"c451b1af-aa09-4621-b237-72a053e5e347\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-g87pb" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.659562 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c451b1af-aa09-4621-b237-72a053e5e347-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-g87pb\" (UID: \"c451b1af-aa09-4621-b237-72a053e5e347\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-g87pb" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.659751 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b8408c5c-b382-4045-96dd-4204e71d0798-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-flxrl\" (UID: \"b8408c5c-b382-4045-96dd-4204e71d0798\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.659791 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8408c5c-b382-4045-96dd-4204e71d0798-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-flxrl\" (UID: \"b8408c5c-b382-4045-96dd-4204e71d0798\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.659909 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8408c5c-b382-4045-96dd-4204e71d0798-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-flxrl\" (UID: \"b8408c5c-b382-4045-96dd-4204e71d0798\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.659930 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-42xhx\" (UniqueName: 
\"kubernetes.io/projected/ae5457f8-56c6-47bc-86eb-87dfc7cd63c9-kube-api-access-42xhx\") pod \"authentication-operator-7f5c659b84-84v6d\" (UID: \"ae5457f8-56c6-47bc-86eb-87dfc7cd63c9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d" Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.660045 5099 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.660043 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/65c18850-3f68-4603-955b-12a4ab882766-available-featuregates\") pod \"openshift-config-operator-5777786469-2pf7j\" (UID: \"65c18850-3f68-4603-955b-12a4ab882766\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2pf7j" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.660111 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2v7lg\" (UniqueName: \"kubernetes.io/projected/513664b2-32c9-4290-9ae7-2400a1c4da84-kube-api-access-2v7lg\") pod \"downloads-747b44746d-ldzlj\" (UID: \"513664b2-32c9-4290-9ae7-2400a1c4da84\") " pod="openshift-console/downloads-747b44746d-ldzlj" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.660190 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c829ef61-cc95-4adc-88a6-433ed349112d-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-q6pjq\" (UID: \"c829ef61-cc95-4adc-88a6-433ed349112d\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q6pjq" Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.660266 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8408c5c-b382-4045-96dd-4204e71d0798-serving-cert podName:b8408c5c-b382-4045-96dd-4204e71d0798 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:56.160203958 +0000 UTC m=+113.867954205 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b8408c5c-b382-4045-96dd-4204e71d0798-serving-cert") pod "kube-controller-manager-operator-69d5f845f8-flxrl" (UID: "b8408c5c-b382-4045-96dd-4204e71d0798") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.660334 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5457f8-56c6-47bc-86eb-87dfc7cd63c9-serving-cert\") pod \"authentication-operator-7f5c659b84-84v6d\" (UID: \"ae5457f8-56c6-47bc-86eb-87dfc7cd63c9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.660341 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c829ef61-cc95-4adc-88a6-433ed349112d-images\") pod \"machine-api-operator-755bb95488-q6pjq\" (UID: \"c829ef61-cc95-4adc-88a6-433ed349112d\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q6pjq" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.660415 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c829ef61-cc95-4adc-88a6-433ed349112d-config\") pod \"machine-api-operator-755bb95488-q6pjq\" (UID: \"c829ef61-cc95-4adc-88a6-433ed349112d\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q6pjq" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.660437 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65c18850-3f68-4603-955b-12a4ab882766-serving-cert\") pod \"openshift-config-operator-5777786469-2pf7j\" (UID: \"65c18850-3f68-4603-955b-12a4ab882766\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2pf7j" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.660458 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8408c5c-b382-4045-96dd-4204e71d0798-config\") pod \"kube-controller-manager-operator-69d5f845f8-flxrl\" (UID: \"b8408c5c-b382-4045-96dd-4204e71d0798\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.660503 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-metrics-certs\") pod \"network-metrics-daemon-6qncx\" (UID: \"47a33b1f-9d8a-4a87-9d5b-15c2b36959df\") " pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.660531 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/65c18850-3f68-4603-955b-12a4ab882766-available-featuregates\") pod \"openshift-config-operator-5777786469-2pf7j\" (UID: \"65c18850-3f68-4603-955b-12a4ab882766\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2pf7j" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.660536 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5457f8-56c6-47bc-86eb-87dfc7cd63c9-config\") pod 
\"authentication-operator-7f5c659b84-84v6d\" (UID: \"ae5457f8-56c6-47bc-86eb-87dfc7cd63c9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d" Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.660598 5099 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.660631 5099 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.660673 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b8408c5c-b382-4045-96dd-4204e71d0798-config podName:b8408c5c-b382-4045-96dd-4204e71d0798 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:56.16065406 +0000 UTC m=+113.868404307 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b8408c5c-b382-4045-96dd-4204e71d0798-config") pod "kube-controller-manager-operator-69d5f845f8-flxrl" (UID: "b8408c5c-b382-4045-96dd-4204e71d0798") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.660701 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-metrics-certs podName:47a33b1f-9d8a-4a87-9d5b-15c2b36959df nodeName:}" failed. No retries permitted until 2026-01-22 14:16:11.660683651 +0000 UTC m=+129.368433968 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-metrics-certs") pod "network-metrics-daemon-6qncx" (UID: "47a33b1f-9d8a-4a87-9d5b-15c2b36959df") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.661210 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5457f8-56c6-47bc-86eb-87dfc7cd63c9-config\") pod \"authentication-operator-7f5c659b84-84v6d\" (UID: \"ae5457f8-56c6-47bc-86eb-87dfc7cd63c9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.661214 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b8408c5c-b382-4045-96dd-4204e71d0798-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-flxrl\" (UID: \"b8408c5c-b382-4045-96dd-4204e71d0798\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.661749 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c829ef61-cc95-4adc-88a6-433ed349112d-config\") pod \"machine-api-operator-755bb95488-q6pjq\" (UID: \"c829ef61-cc95-4adc-88a6-433ed349112d\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q6pjq" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.665998 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c451b1af-aa09-4621-b237-72a053e5e347-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-g87pb\" (UID: \"c451b1af-aa09-4621-b237-72a053e5e347\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-g87pb" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.667512 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5457f8-56c6-47bc-86eb-87dfc7cd63c9-serving-cert\") pod \"authentication-operator-7f5c659b84-84v6d\" (UID: \"ae5457f8-56c6-47bc-86eb-87dfc7cd63c9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.670117 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65c18850-3f68-4603-955b-12a4ab882766-serving-cert\") pod \"openshift-config-operator-5777786469-2pf7j\" (UID: \"65c18850-3f68-4603-955b-12a4ab882766\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2pf7j" Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.677633 5099 projected.go:289] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.677695 5099 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Jan 22 14:15:55 crc kubenswrapper[5099]: E0122 14:15:55.677820 5099 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/b8408c5c-b382-4045-96dd-4204e71d0798-kube-api-access podName:b8408c5c-b382-4045-96dd-4204e71d0798 nodeName:}" failed. No retries permitted until 2026-01-22 14:15:56.177781585 +0000 UTC m=+113.885531862 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b8408c5c-b382-4045-96dd-4204e71d0798-kube-api-access") pod "kube-controller-manager-operator-69d5f845f8-flxrl" (UID: "b8408c5c-b382-4045-96dd-4204e71d0798") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.680654 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c829ef61-cc95-4adc-88a6-433ed349112d-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-q6pjq\" (UID: \"c829ef61-cc95-4adc-88a6-433ed349112d\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q6pjq" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.682121 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-42xhx\" (UniqueName: \"kubernetes.io/projected/ae5457f8-56c6-47bc-86eb-87dfc7cd63c9-kube-api-access-42xhx\") pod \"authentication-operator-7f5c659b84-84v6d\" (UID: \"ae5457f8-56c6-47bc-86eb-87dfc7cd63c9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.682274 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnlrm\" (UniqueName: \"kubernetes.io/projected/c451b1af-aa09-4621-b237-72a053e5e347-kube-api-access-pnlrm\") pod \"ingress-operator-6b9cb4dbcf-g87pb\" (UID: \"c451b1af-aa09-4621-b237-72a053e5e347\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-g87pb" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.682557 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwrlq\" (UniqueName: \"kubernetes.io/projected/c829ef61-cc95-4adc-88a6-433ed349112d-kube-api-access-gwrlq\") pod \"machine-api-operator-755bb95488-q6pjq\" (UID: \"c829ef61-cc95-4adc-88a6-433ed349112d\") " pod="openshift-machine-api/machine-api-operator-755bb95488-q6pjq" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.685077 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v7lg\" (UniqueName: \"kubernetes.io/projected/513664b2-32c9-4290-9ae7-2400a1c4da84-kube-api-access-2v7lg\") pod \"downloads-747b44746d-ldzlj\" (UID: \"513664b2-32c9-4290-9ae7-2400a1c4da84\") " pod="openshift-console/downloads-747b44746d-ldzlj" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.689998 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c451b1af-aa09-4621-b237-72a053e5e347-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-g87pb\" (UID: \"c451b1af-aa09-4621-b237-72a053e5e347\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-g87pb" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.694933 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f2j9\" (UniqueName: \"kubernetes.io/projected/65c18850-3f68-4603-955b-12a4ab882766-kube-api-access-9f2j9\") pod \"openshift-config-operator-5777786469-2pf7j\" (UID: \"65c18850-3f68-4603-955b-12a4ab882766\") " 
pod="openshift-config-operator/openshift-config-operator-5777786469-2pf7j" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.712466 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v"] Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.712702 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rf4fl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.713134 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.717320 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.717365 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.718408 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.718531 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.718722 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.719625 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.720301 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.720479 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.756441 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-q6pjq" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.813512 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-ldzlj" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.821381 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-2pf7j" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.831326 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.837496 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-g87pb" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.862844 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fe48341-9de1-4b01-957e-16aee9a3eb75-kube-api-access\") pod \"kube-apiserver-operator-575994946d-rf4fl\" (UID: \"0fe48341-9de1-4b01-957e-16aee9a3eb75\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rf4fl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.862894 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0fe48341-9de1-4b01-957e-16aee9a3eb75-tmp-dir\") pod \"kube-apiserver-operator-575994946d-rf4fl\" (UID: \"0fe48341-9de1-4b01-957e-16aee9a3eb75\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rf4fl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.862921 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fe48341-9de1-4b01-957e-16aee9a3eb75-config\") pod \"kube-apiserver-operator-575994946d-rf4fl\" (UID: \"0fe48341-9de1-4b01-957e-16aee9a3eb75\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rf4fl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.863013 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fe48341-9de1-4b01-957e-16aee9a3eb75-serving-cert\") pod \"kube-apiserver-operator-575994946d-rf4fl\" (UID: \"0fe48341-9de1-4b01-957e-16aee9a3eb75\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rf4fl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.964224 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fe48341-9de1-4b01-957e-16aee9a3eb75-kube-api-access\") pod \"kube-apiserver-operator-575994946d-rf4fl\" (UID: \"0fe48341-9de1-4b01-957e-16aee9a3eb75\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rf4fl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.964300 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0fe48341-9de1-4b01-957e-16aee9a3eb75-tmp-dir\") pod \"kube-apiserver-operator-575994946d-rf4fl\" (UID: \"0fe48341-9de1-4b01-957e-16aee9a3eb75\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rf4fl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.964325 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fe48341-9de1-4b01-957e-16aee9a3eb75-config\") pod \"kube-apiserver-operator-575994946d-rf4fl\" (UID: \"0fe48341-9de1-4b01-957e-16aee9a3eb75\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rf4fl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.964357 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fe48341-9de1-4b01-957e-16aee9a3eb75-serving-cert\") pod \"kube-apiserver-operator-575994946d-rf4fl\" (UID: \"0fe48341-9de1-4b01-957e-16aee9a3eb75\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rf4fl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.965435 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0fe48341-9de1-4b01-957e-16aee9a3eb75-tmp-dir\") pod \"kube-apiserver-operator-575994946d-rf4fl\" (UID: \"0fe48341-9de1-4b01-957e-16aee9a3eb75\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rf4fl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.966391 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fe48341-9de1-4b01-957e-16aee9a3eb75-config\") pod \"kube-apiserver-operator-575994946d-rf4fl\" (UID: \"0fe48341-9de1-4b01-957e-16aee9a3eb75\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rf4fl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.971298 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fe48341-9de1-4b01-957e-16aee9a3eb75-serving-cert\") pod \"kube-apiserver-operator-575994946d-rf4fl\" (UID: \"0fe48341-9de1-4b01-957e-16aee9a3eb75\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rf4fl" Jan 22 14:15:55 crc kubenswrapper[5099]: I0122 14:15:55.985875 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fe48341-9de1-4b01-957e-16aee9a3eb75-kube-api-access\") pod \"kube-apiserver-operator-575994946d-rf4fl\" (UID: \"0fe48341-9de1-4b01-957e-16aee9a3eb75\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rf4fl" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.035885 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rf4fl" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.167583 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8408c5c-b382-4045-96dd-4204e71d0798-config\") pod \"kube-controller-manager-operator-69d5f845f8-flxrl\" (UID: \"b8408c5c-b382-4045-96dd-4204e71d0798\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.167715 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8408c5c-b382-4045-96dd-4204e71d0798-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-flxrl\" (UID: \"b8408c5c-b382-4045-96dd-4204e71d0798\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.169548 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8408c5c-b382-4045-96dd-4204e71d0798-config\") pod \"kube-controller-manager-operator-69d5f845f8-flxrl\" (UID: \"b8408c5c-b382-4045-96dd-4204e71d0798\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.178776 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8408c5c-b382-4045-96dd-4204e71d0798-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-flxrl\" (UID: \"b8408c5c-b382-4045-96dd-4204e71d0798\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.184918 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-9xszn"] Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.185135 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.185373 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.185132 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.185540 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.191528 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.191921 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.191707 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.191818 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.191705 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.195473 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.208032 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.269789 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8408c5c-b382-4045-96dd-4204e71d0798-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-flxrl\" (UID: \"b8408c5c-b382-4045-96dd-4204e71d0798\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.281041 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8408c5c-b382-4045-96dd-4204e71d0798-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-flxrl\" (UID: \"b8408c5c-b382-4045-96dd-4204e71d0798\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.320494 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-8cgs9"] Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.320879 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.320653 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.324080 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.324271 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.324303 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.327316 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.327440 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.327445 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.328227 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.328297 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.332780 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.343251 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.376944 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm6s7\" (UniqueName: \"kubernetes.io/projected/94313bd3-0b8e-452e-b3b0-c549aabb8426-kube-api-access-fm6s7\") pod \"controller-manager-65b6cccf98-9xszn\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.376997 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94313bd3-0b8e-452e-b3b0-c549aabb8426-config\") pod \"controller-manager-65b6cccf98-9xszn\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.377019 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/94313bd3-0b8e-452e-b3b0-c549aabb8426-tmp\") pod \"controller-manager-65b6cccf98-9xszn\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.377071 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/04ae431a-3013-4a10-95c2-3b4cb5c52cd0-tmp\") pod \"cluster-image-registry-operator-86c45576b9-v9n8v\" (UID: \"04ae431a-3013-4a10-95c2-3b4cb5c52cd0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.377097 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/04ae431a-3013-4a10-95c2-3b4cb5c52cd0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-v9n8v\" (UID: \"04ae431a-3013-4a10-95c2-3b4cb5c52cd0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.377119 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94313bd3-0b8e-452e-b3b0-c549aabb8426-client-ca\") pod \"controller-manager-65b6cccf98-9xszn\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.377143 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94313bd3-0b8e-452e-b3b0-c549aabb8426-serving-cert\") pod \"controller-manager-65b6cccf98-9xszn\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.377185 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/94313bd3-0b8e-452e-b3b0-c549aabb8426-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-9xszn\" (UID: 
\"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.377247 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04ae431a-3013-4a10-95c2-3b4cb5c52cd0-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-v9n8v\" (UID: \"04ae431a-3013-4a10-95c2-3b4cb5c52cd0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.377309 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04ae431a-3013-4a10-95c2-3b4cb5c52cd0-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-v9n8v\" (UID: \"04ae431a-3013-4a10-95c2-3b4cb5c52cd0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.377341 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/04ae431a-3013-4a10-95c2-3b4cb5c52cd0-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-v9n8v\" (UID: \"04ae431a-3013-4a10-95c2-3b4cb5c52cd0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.377368 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tttcx\" (UniqueName: \"kubernetes.io/projected/04ae431a-3013-4a10-95c2-3b4cb5c52cd0-kube-api-access-tttcx\") pod \"cluster-image-registry-operator-86c45576b9-v9n8v\" (UID: \"04ae431a-3013-4a10-95c2-3b4cb5c52cd0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.479434 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04ae431a-3013-4a10-95c2-3b4cb5c52cd0-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-v9n8v\" (UID: \"04ae431a-3013-4a10-95c2-3b4cb5c52cd0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.479499 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04ae431a-3013-4a10-95c2-3b4cb5c52cd0-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-v9n8v\" (UID: \"04ae431a-3013-4a10-95c2-3b4cb5c52cd0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.479533 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/04ae431a-3013-4a10-95c2-3b4cb5c52cd0-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-v9n8v\" (UID: \"04ae431a-3013-4a10-95c2-3b4cb5c52cd0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.479581 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tttcx\" (UniqueName: 
\"kubernetes.io/projected/04ae431a-3013-4a10-95c2-3b4cb5c52cd0-kube-api-access-tttcx\") pod \"cluster-image-registry-operator-86c45576b9-v9n8v\" (UID: \"04ae431a-3013-4a10-95c2-3b4cb5c52cd0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.479670 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fm6s7\" (UniqueName: \"kubernetes.io/projected/94313bd3-0b8e-452e-b3b0-c549aabb8426-kube-api-access-fm6s7\") pod \"controller-manager-65b6cccf98-9xszn\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.479693 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94313bd3-0b8e-452e-b3b0-c549aabb8426-config\") pod \"controller-manager-65b6cccf98-9xszn\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.479713 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/94313bd3-0b8e-452e-b3b0-c549aabb8426-tmp\") pod \"controller-manager-65b6cccf98-9xszn\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.480432 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/94313bd3-0b8e-452e-b3b0-c549aabb8426-tmp\") pod \"controller-manager-65b6cccf98-9xszn\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.480867 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/04ae431a-3013-4a10-95c2-3b4cb5c52cd0-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-v9n8v\" (UID: \"04ae431a-3013-4a10-95c2-3b4cb5c52cd0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.481275 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-8cgs9" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.479435 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6"] Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.481831 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/04ae431a-3013-4a10-95c2-3b4cb5c52cd0-tmp\") pod \"cluster-image-registry-operator-86c45576b9-v9n8v\" (UID: \"04ae431a-3013-4a10-95c2-3b4cb5c52cd0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.481980 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/04ae431a-3013-4a10-95c2-3b4cb5c52cd0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-v9n8v\" (UID: \"04ae431a-3013-4a10-95c2-3b4cb5c52cd0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.482041 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94313bd3-0b8e-452e-b3b0-c549aabb8426-client-ca\") pod \"controller-manager-65b6cccf98-9xszn\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.482092 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94313bd3-0b8e-452e-b3b0-c549aabb8426-serving-cert\") pod \"controller-manager-65b6cccf98-9xszn\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.482099 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04ae431a-3013-4a10-95c2-3b4cb5c52cd0-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-v9n8v\" (UID: \"04ae431a-3013-4a10-95c2-3b4cb5c52cd0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.482120 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/94313bd3-0b8e-452e-b3b0-c549aabb8426-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-9xszn\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.482149 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94313bd3-0b8e-452e-b3b0-c549aabb8426-config\") pod \"controller-manager-65b6cccf98-9xszn\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.482774 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/04ae431a-3013-4a10-95c2-3b4cb5c52cd0-tmp\") pod \"cluster-image-registry-operator-86c45576b9-v9n8v\" (UID: 
\"04ae431a-3013-4a10-95c2-3b4cb5c52cd0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.483576 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94313bd3-0b8e-452e-b3b0-c549aabb8426-client-ca\") pod \"controller-manager-65b6cccf98-9xszn\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.483965 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/94313bd3-0b8e-452e-b3b0-c549aabb8426-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-9xszn\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.485917 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.486568 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.486772 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.490329 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.496918 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/04ae431a-3013-4a10-95c2-3b4cb5c52cd0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-v9n8v\" (UID: \"04ae431a-3013-4a10-95c2-3b4cb5c52cd0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.497619 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.501875 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94313bd3-0b8e-452e-b3b0-c549aabb8426-serving-cert\") pod \"controller-manager-65b6cccf98-9xszn\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.508316 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tttcx\" (UniqueName: \"kubernetes.io/projected/04ae431a-3013-4a10-95c2-3b4cb5c52cd0-kube-api-access-tttcx\") pod \"cluster-image-registry-operator-86c45576b9-v9n8v\" (UID: \"04ae431a-3013-4a10-95c2-3b4cb5c52cd0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.509000 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 22 14:15:56 crc kubenswrapper[5099]: W0122 14:15:56.510358 5099 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65c18850_3f68_4603_955b_12a4ab882766.slice/crio-2aa3c25f2adfbd9543be3cba474ba5fdbe419fc133f21b393f91804815c8680b WatchSource:0}: Error finding container 2aa3c25f2adfbd9543be3cba474ba5fdbe419fc133f21b393f91804815c8680b: Status 404 returned error can't find the container with id 2aa3c25f2adfbd9543be3cba474ba5fdbe419fc133f21b393f91804815c8680b Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.515192 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm6s7\" (UniqueName: \"kubernetes.io/projected/94313bd3-0b8e-452e-b3b0-c549aabb8426-kube-api-access-fm6s7\") pod \"controller-manager-65b6cccf98-9xszn\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.516954 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04ae431a-3013-4a10-95c2-3b4cb5c52cd0-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-v9n8v\" (UID: \"04ae431a-3013-4a10-95c2-3b4cb5c52cd0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.534465 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.584388 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1a399d2-6deb-4080-8525-6419d061001b-serving-cert\") pod \"console-operator-67c89758df-8cgs9\" (UID: \"b1a399d2-6deb-4080-8525-6419d061001b\") " pod="openshift-console-operator/console-operator-67c89758df-8cgs9" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.585188 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7pxb\" (UniqueName: \"kubernetes.io/projected/b1a399d2-6deb-4080-8525-6419d061001b-kube-api-access-r7pxb\") pod \"console-operator-67c89758df-8cgs9\" (UID: \"b1a399d2-6deb-4080-8525-6419d061001b\") " pod="openshift-console-operator/console-operator-67c89758df-8cgs9" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.585361 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1a399d2-6deb-4080-8525-6419d061001b-trusted-ca\") pod \"console-operator-67c89758df-8cgs9\" (UID: \"b1a399d2-6deb-4080-8525-6419d061001b\") " pod="openshift-console-operator/console-operator-67c89758df-8cgs9" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.585411 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1a399d2-6deb-4080-8525-6419d061001b-config\") pod \"console-operator-67c89758df-8cgs9\" (UID: \"b1a399d2-6deb-4080-8525-6419d061001b\") " pod="openshift-console-operator/console-operator-67c89758df-8cgs9" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.601312 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jpclx"] Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.611499 5099 kubelet.go:2537] "SyncLoop 
ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-rzgsc"] Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.612229 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.612631 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jpclx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.615285 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-xg9k5"] Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.615333 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.615535 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.615714 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.616261 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.616475 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.616650 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.616824 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.616989 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.617176 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.617409 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.617568 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484855-j94jx"] Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.617775 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-rzgsc" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.618040 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-xg9k5" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.620938 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.621306 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.622572 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.622608 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.623401 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.624805 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.625068 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.635904 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-pfh7d"] Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.637844 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-j94jx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.640957 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.641121 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.640958 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.641449 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.642199 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.687013 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bbff495-517c-4f7c-b0e0-797cb63884c9-serving-cert\") pod \"route-controller-manager-776cdc94d6-k7dg6\" (UID: \"0bbff495-517c-4f7c-b0e0-797cb63884c9\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.687060 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bbff495-517c-4f7c-b0e0-797cb63884c9-config\") pod \"route-controller-manager-776cdc94d6-k7dg6\" (UID: \"0bbff495-517c-4f7c-b0e0-797cb63884c9\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.687098 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/46d7f2bf-b38a-4bea-ba85-211ffe114151-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-jpclx\" (UID: \"46d7f2bf-b38a-4bea-ba85-211ffe114151\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jpclx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.687114 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46d7f2bf-b38a-4bea-ba85-211ffe114151-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-jpclx\" (UID: \"46d7f2bf-b38a-4bea-ba85-211ffe114151\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jpclx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.687129 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0bbff495-517c-4f7c-b0e0-797cb63884c9-tmp\") pod \"route-controller-manager-776cdc94d6-k7dg6\" (UID: \"0bbff495-517c-4f7c-b0e0-797cb63884c9\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.687182 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1a399d2-6deb-4080-8525-6419d061001b-serving-cert\") pod \"console-operator-67c89758df-8cgs9\" (UID: \"b1a399d2-6deb-4080-8525-6419d061001b\") " pod="openshift-console-operator/console-operator-67c89758df-8cgs9" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.687215 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5nw6\" (UniqueName: \"kubernetes.io/projected/8efd2ae7-b5de-4a8a-beb9-fcc7adcb2241-kube-api-access-t5nw6\") pod \"migrator-866fcbc849-rzgsc\" (UID: \"8efd2ae7-b5de-4a8a-beb9-fcc7adcb2241\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-rzgsc" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.687237 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r7pxb\" (UniqueName: \"kubernetes.io/projected/b1a399d2-6deb-4080-8525-6419d061001b-kube-api-access-r7pxb\") pod \"console-operator-67c89758df-8cgs9\" (UID: \"b1a399d2-6deb-4080-8525-6419d061001b\") " pod="openshift-console-operator/console-operator-67c89758df-8cgs9" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.687259 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a2c820d-64d7-48dc-845e-6fea4213ccbe-config-volume\") pod \"collect-profiles-29484855-j94jx\" (UID: \"0a2c820d-64d7-48dc-845e-6fea4213ccbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-j94jx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.687370 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw9vf\" (UniqueName: \"kubernetes.io/projected/0a2c820d-64d7-48dc-845e-6fea4213ccbe-kube-api-access-gw9vf\") pod \"collect-profiles-29484855-j94jx\" (UID: \"0a2c820d-64d7-48dc-845e-6fea4213ccbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-j94jx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.687770 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnnwb\" (UniqueName: \"kubernetes.io/projected/20945218-8d75-4f6e-ac07-6815b888e9b7-kube-api-access-xnnwb\") pod \"dns-operator-799b87ffcd-xg9k5\" (UID: \"20945218-8d75-4f6e-ac07-6815b888e9b7\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xg9k5" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.687819 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1a399d2-6deb-4080-8525-6419d061001b-trusted-ca\") pod \"console-operator-67c89758df-8cgs9\" (UID: \"b1a399d2-6deb-4080-8525-6419d061001b\") " pod="openshift-console-operator/console-operator-67c89758df-8cgs9" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.687879 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a2c820d-64d7-48dc-845e-6fea4213ccbe-secret-volume\") pod \"collect-profiles-29484855-j94jx\" (UID: \"0a2c820d-64d7-48dc-845e-6fea4213ccbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-j94jx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.687917 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b1a399d2-6deb-4080-8525-6419d061001b-config\") pod \"console-operator-67c89758df-8cgs9\" (UID: \"b1a399d2-6deb-4080-8525-6419d061001b\") " pod="openshift-console-operator/console-operator-67c89758df-8cgs9" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.687986 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46d7f2bf-b38a-4bea-ba85-211ffe114151-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-jpclx\" (UID: \"46d7f2bf-b38a-4bea-ba85-211ffe114151\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jpclx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.688056 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/20945218-8d75-4f6e-ac07-6815b888e9b7-metrics-tls\") pod \"dns-operator-799b87ffcd-xg9k5\" (UID: \"20945218-8d75-4f6e-ac07-6815b888e9b7\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xg9k5" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.688266 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d7f2bf-b38a-4bea-ba85-211ffe114151-config\") pod \"openshift-kube-scheduler-operator-54f497555d-jpclx\" (UID: \"46d7f2bf-b38a-4bea-ba85-211ffe114151\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jpclx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.688367 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq45m\" (UniqueName: \"kubernetes.io/projected/0bbff495-517c-4f7c-b0e0-797cb63884c9-kube-api-access-kq45m\") pod \"route-controller-manager-776cdc94d6-k7dg6\" (UID: \"0bbff495-517c-4f7c-b0e0-797cb63884c9\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.688417 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bbff495-517c-4f7c-b0e0-797cb63884c9-client-ca\") pod \"route-controller-manager-776cdc94d6-k7dg6\" (UID: \"0bbff495-517c-4f7c-b0e0-797cb63884c9\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.688440 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20945218-8d75-4f6e-ac07-6815b888e9b7-tmp-dir\") pod \"dns-operator-799b87ffcd-xg9k5\" (UID: \"20945218-8d75-4f6e-ac07-6815b888e9b7\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xg9k5" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.689130 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1a399d2-6deb-4080-8525-6419d061001b-config\") pod \"console-operator-67c89758df-8cgs9\" (UID: \"b1a399d2-6deb-4080-8525-6419d061001b\") " pod="openshift-console-operator/console-operator-67c89758df-8cgs9" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.689154 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1a399d2-6deb-4080-8525-6419d061001b-trusted-ca\") pod 
\"console-operator-67c89758df-8cgs9\" (UID: \"b1a399d2-6deb-4080-8525-6419d061001b\") " pod="openshift-console-operator/console-operator-67c89758df-8cgs9" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.691951 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1a399d2-6deb-4080-8525-6419d061001b-serving-cert\") pod \"console-operator-67c89758df-8cgs9\" (UID: \"b1a399d2-6deb-4080-8525-6419d061001b\") " pod="openshift-console-operator/console-operator-67c89758df-8cgs9" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.703556 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.711944 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7pxb\" (UniqueName: \"kubernetes.io/projected/b1a399d2-6deb-4080-8525-6419d061001b-kube-api-access-r7pxb\") pod \"console-operator-67c89758df-8cgs9\" (UID: \"b1a399d2-6deb-4080-8525-6419d061001b\") " pod="openshift-console-operator/console-operator-67c89758df-8cgs9" Jan 22 14:15:56 crc kubenswrapper[5099]: W0122 14:15:56.745433 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04ae431a_3013_4a10_95c2_3b4cb5c52cd0.slice/crio-9c825922345dd3259035337e65184de7c47ac599465b143f6fe73bb07349d217 WatchSource:0}: Error finding container 9c825922345dd3259035337e65184de7c47ac599465b143f6fe73bb07349d217: Status 404 returned error can't find the container with id 9c825922345dd3259035337e65184de7c47ac599465b143f6fe73bb07349d217 Jan 22 14:15:56 crc kubenswrapper[5099]: W0122 14:15:56.750009 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc451b1af_aa09_4621_b237_72a053e5e347.slice/crio-7e0aa78dfa83b00985b61af5bb8d5b7ee74d57458bb4b3649b835e387ef132a5 WatchSource:0}: Error finding container 7e0aa78dfa83b00985b61af5bb8d5b7ee74d57458bb4b3649b835e387ef132a5: Status 404 returned error can't find the container with id 7e0aa78dfa83b00985b61af5bb8d5b7ee74d57458bb4b3649b835e387ef132a5 Jan 22 14:15:56 crc kubenswrapper[5099]: W0122 14:15:56.757054 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fe48341_9de1_4b01_957e_16aee9a3eb75.slice/crio-7442b2e0999b16d8205fa263b4b78aaca7ea61c05b6643e9881901c87494624f WatchSource:0}: Error finding container 7442b2e0999b16d8205fa263b4b78aaca7ea61c05b6643e9881901c87494624f: Status 404 returned error can't find the container with id 7442b2e0999b16d8205fa263b4b78aaca7ea61c05b6643e9881901c87494624f Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.790317 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bbff495-517c-4f7c-b0e0-797cb63884c9-client-ca\") pod \"route-controller-manager-776cdc94d6-k7dg6\" (UID: \"0bbff495-517c-4f7c-b0e0-797cb63884c9\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.790366 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20945218-8d75-4f6e-ac07-6815b888e9b7-tmp-dir\") pod \"dns-operator-799b87ffcd-xg9k5\" (UID: 
\"20945218-8d75-4f6e-ac07-6815b888e9b7\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xg9k5" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.790489 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bbff495-517c-4f7c-b0e0-797cb63884c9-serving-cert\") pod \"route-controller-manager-776cdc94d6-k7dg6\" (UID: \"0bbff495-517c-4f7c-b0e0-797cb63884c9\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.790712 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bbff495-517c-4f7c-b0e0-797cb63884c9-config\") pod \"route-controller-manager-776cdc94d6-k7dg6\" (UID: \"0bbff495-517c-4f7c-b0e0-797cb63884c9\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.790880 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/46d7f2bf-b38a-4bea-ba85-211ffe114151-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-jpclx\" (UID: \"46d7f2bf-b38a-4bea-ba85-211ffe114151\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jpclx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.790926 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46d7f2bf-b38a-4bea-ba85-211ffe114151-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-jpclx\" (UID: \"46d7f2bf-b38a-4bea-ba85-211ffe114151\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jpclx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.790955 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0bbff495-517c-4f7c-b0e0-797cb63884c9-tmp\") pod \"route-controller-manager-776cdc94d6-k7dg6\" (UID: \"0bbff495-517c-4f7c-b0e0-797cb63884c9\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.791059 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t5nw6\" (UniqueName: \"kubernetes.io/projected/8efd2ae7-b5de-4a8a-beb9-fcc7adcb2241-kube-api-access-t5nw6\") pod \"migrator-866fcbc849-rzgsc\" (UID: \"8efd2ae7-b5de-4a8a-beb9-fcc7adcb2241\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-rzgsc" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.791093 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20945218-8d75-4f6e-ac07-6815b888e9b7-tmp-dir\") pod \"dns-operator-799b87ffcd-xg9k5\" (UID: \"20945218-8d75-4f6e-ac07-6815b888e9b7\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xg9k5" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.791114 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a2c820d-64d7-48dc-845e-6fea4213ccbe-config-volume\") pod \"collect-profiles-29484855-j94jx\" (UID: \"0a2c820d-64d7-48dc-845e-6fea4213ccbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-j94jx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.791199 
5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gw9vf\" (UniqueName: \"kubernetes.io/projected/0a2c820d-64d7-48dc-845e-6fea4213ccbe-kube-api-access-gw9vf\") pod \"collect-profiles-29484855-j94jx\" (UID: \"0a2c820d-64d7-48dc-845e-6fea4213ccbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-j94jx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.791297 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xnnwb\" (UniqueName: \"kubernetes.io/projected/20945218-8d75-4f6e-ac07-6815b888e9b7-kube-api-access-xnnwb\") pod \"dns-operator-799b87ffcd-xg9k5\" (UID: \"20945218-8d75-4f6e-ac07-6815b888e9b7\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xg9k5" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.791330 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a2c820d-64d7-48dc-845e-6fea4213ccbe-secret-volume\") pod \"collect-profiles-29484855-j94jx\" (UID: \"0a2c820d-64d7-48dc-845e-6fea4213ccbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-j94jx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.791354 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46d7f2bf-b38a-4bea-ba85-211ffe114151-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-jpclx\" (UID: \"46d7f2bf-b38a-4bea-ba85-211ffe114151\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jpclx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.791365 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bbff495-517c-4f7c-b0e0-797cb63884c9-client-ca\") pod \"route-controller-manager-776cdc94d6-k7dg6\" (UID: \"0bbff495-517c-4f7c-b0e0-797cb63884c9\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.792068 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0bbff495-517c-4f7c-b0e0-797cb63884c9-tmp\") pod \"route-controller-manager-776cdc94d6-k7dg6\" (UID: \"0bbff495-517c-4f7c-b0e0-797cb63884c9\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.791382 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/20945218-8d75-4f6e-ac07-6815b888e9b7-metrics-tls\") pod \"dns-operator-799b87ffcd-xg9k5\" (UID: \"20945218-8d75-4f6e-ac07-6815b888e9b7\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xg9k5" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.792376 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d7f2bf-b38a-4bea-ba85-211ffe114151-config\") pod \"openshift-kube-scheduler-operator-54f497555d-jpclx\" (UID: \"46d7f2bf-b38a-4bea-ba85-211ffe114151\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jpclx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.792414 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kq45m\" (UniqueName: 
\"kubernetes.io/projected/0bbff495-517c-4f7c-b0e0-797cb63884c9-kube-api-access-kq45m\") pod \"route-controller-manager-776cdc94d6-k7dg6\" (UID: \"0bbff495-517c-4f7c-b0e0-797cb63884c9\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.792728 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bbff495-517c-4f7c-b0e0-797cb63884c9-config\") pod \"route-controller-manager-776cdc94d6-k7dg6\" (UID: \"0bbff495-517c-4f7c-b0e0-797cb63884c9\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.792959 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a2c820d-64d7-48dc-845e-6fea4213ccbe-config-volume\") pod \"collect-profiles-29484855-j94jx\" (UID: \"0a2c820d-64d7-48dc-845e-6fea4213ccbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-j94jx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.793082 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/46d7f2bf-b38a-4bea-ba85-211ffe114151-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-jpclx\" (UID: \"46d7f2bf-b38a-4bea-ba85-211ffe114151\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jpclx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.793311 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d7f2bf-b38a-4bea-ba85-211ffe114151-config\") pod \"openshift-kube-scheduler-operator-54f497555d-jpclx\" (UID: \"46d7f2bf-b38a-4bea-ba85-211ffe114151\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jpclx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.797653 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46d7f2bf-b38a-4bea-ba85-211ffe114151-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-jpclx\" (UID: \"46d7f2bf-b38a-4bea-ba85-211ffe114151\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jpclx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.799759 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a2c820d-64d7-48dc-845e-6fea4213ccbe-secret-volume\") pod \"collect-profiles-29484855-j94jx\" (UID: \"0a2c820d-64d7-48dc-845e-6fea4213ccbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-j94jx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.802621 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bbff495-517c-4f7c-b0e0-797cb63884c9-serving-cert\") pod \"route-controller-manager-776cdc94d6-k7dg6\" (UID: \"0bbff495-517c-4f7c-b0e0-797cb63884c9\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.810437 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/20945218-8d75-4f6e-ac07-6815b888e9b7-metrics-tls\") pod \"dns-operator-799b87ffcd-xg9k5\" (UID: 
\"20945218-8d75-4f6e-ac07-6815b888e9b7\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xg9k5" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.813733 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnnwb\" (UniqueName: \"kubernetes.io/projected/20945218-8d75-4f6e-ac07-6815b888e9b7-kube-api-access-xnnwb\") pod \"dns-operator-799b87ffcd-xg9k5\" (UID: \"20945218-8d75-4f6e-ac07-6815b888e9b7\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xg9k5" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.817409 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq45m\" (UniqueName: \"kubernetes.io/projected/0bbff495-517c-4f7c-b0e0-797cb63884c9-kube-api-access-kq45m\") pod \"route-controller-manager-776cdc94d6-k7dg6\" (UID: \"0bbff495-517c-4f7c-b0e0-797cb63884c9\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.817640 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-8cgs9" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.817830 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw9vf\" (UniqueName: \"kubernetes.io/projected/0a2c820d-64d7-48dc-845e-6fea4213ccbe-kube-api-access-gw9vf\") pod \"collect-profiles-29484855-j94jx\" (UID: \"0a2c820d-64d7-48dc-845e-6fea4213ccbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-j94jx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.822073 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46d7f2bf-b38a-4bea-ba85-211ffe114151-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-jpclx\" (UID: \"46d7f2bf-b38a-4bea-ba85-211ffe114151\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jpclx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.825601 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5nw6\" (UniqueName: \"kubernetes.io/projected/8efd2ae7-b5de-4a8a-beb9-fcc7adcb2241-kube-api-access-t5nw6\") pod \"migrator-866fcbc849-rzgsc\" (UID: \"8efd2ae7-b5de-4a8a-beb9-fcc7adcb2241\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-rzgsc" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.939569 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9"] Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.939801 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.949474 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.949607 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.949897 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.950353 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.952152 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.952490 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.952777 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.952874 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-jvzxb"] Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.953977 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.954006 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.954088 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.954093 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.954263 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.954302 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.954918 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.952898 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.959274 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.959526 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jpclx" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.959762 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.959776 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.959965 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.960487 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.960636 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.960790 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.960955 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.965377 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.965672 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.966255 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-jvzxb" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.967123 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.968208 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.969009 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.969750 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.969849 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.970618 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-zrclr"] Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.975450 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996130 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f258327b-5062-4466-bd24-cc22c2f56087-etcd-serving-ca\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996200 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996277 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee755aa6-a943-4869-a688-c0da5d38aafa-config\") pod \"kube-storage-version-migrator-operator-565b79b866-jvzxb\" (UID: \"ee755aa6-a943-4869-a688-c0da5d38aafa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-jvzxb" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996315 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npn2w\" (UniqueName: \"kubernetes.io/projected/f258327b-5062-4466-bd24-cc22c2f56087-kube-api-access-npn2w\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996338 5099 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f258327b-5062-4466-bd24-cc22c2f56087-serving-cert\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996372 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee755aa6-a943-4869-a688-c0da5d38aafa-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-jvzxb\" (UID: \"ee755aa6-a943-4869-a688-c0da5d38aafa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-jvzxb" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996394 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996416 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996456 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt77j\" (UniqueName: \"kubernetes.io/projected/7894c17b-6de7-426e-b27a-4834b7186e8f-kube-api-access-xt77j\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996480 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-audit-policies\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996499 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996520 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f258327b-5062-4466-bd24-cc22c2f56087-audit-policies\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:56 crc 
kubenswrapper[5099]: I0122 14:15:56.996541 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996564 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlhl2\" (UniqueName: \"kubernetes.io/projected/ee755aa6-a943-4869-a688-c0da5d38aafa-kube-api-access-mlhl2\") pod \"kube-storage-version-migrator-operator-565b79b866-jvzxb\" (UID: \"ee755aa6-a943-4869-a688-c0da5d38aafa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-jvzxb" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996580 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f258327b-5062-4466-bd24-cc22c2f56087-audit-dir\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996596 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7894c17b-6de7-426e-b27a-4834b7186e8f-audit-dir\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996617 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996654 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996675 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f258327b-5062-4466-bd24-cc22c2f56087-trusted-ca-bundle\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996696 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f258327b-5062-4466-bd24-cc22c2f56087-encryption-config\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996724 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996744 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996765 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996783 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:56 crc kubenswrapper[5099]: I0122 14:15:56.996812 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f258327b-5062-4466-bd24-cc22c2f56087-etcd-client\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.038307 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-b798r"] Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.038490 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-zrclr" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.039907 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-xg9k5" Jan 22 14:15:57 crc kubenswrapper[5099]: W0122 14:15:57.041364 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1a399d2_6deb_4080_8525_6419d061001b.slice/crio-bdffbe98dcf955fff4de07c45914077ca1eb424147110c66fea796cfa046366e WatchSource:0}: Error finding container bdffbe98dcf955fff4de07c45914077ca1eb424147110c66fea796cfa046366e: Status 404 returned error can't find the container with id bdffbe98dcf955fff4de07c45914077ca1eb424147110c66fea796cfa046366e Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.041644 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.043035 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-rzgsc" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.050434 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.056444 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-w89ff"] Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.063758 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp"] Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.064105 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-b798r" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.064743 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.067005 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-j94jx" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.068429 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-ldzlj"] Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.068466 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-rwsnz"] Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.072534 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.072870 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.073153 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.073361 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.073486 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.073624 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.073824 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.073981 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.074156 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.074417 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098075 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f258327b-5062-4466-bd24-cc22c2f56087-encryption-config\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098130 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098178 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098202 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098222 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098252 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlkkl\" (UniqueName: \"kubernetes.io/projected/3e9d436d-36cf-4f6b-bbd3-6f1931e7228c-kube-api-access-nlkkl\") pod \"multus-admission-controller-69db94689b-zrclr\" (UID: \"3e9d436d-36cf-4f6b-bbd3-6f1931e7228c\") " pod="openshift-multus/multus-admission-controller-69db94689b-zrclr" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098290 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f258327b-5062-4466-bd24-cc22c2f56087-etcd-client\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098314 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f258327b-5062-4466-bd24-cc22c2f56087-etcd-serving-ca\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098336 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098366 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zxqn\" (UniqueName: \"kubernetes.io/projected/d69099b1-1e3d-4007-b1d1-039d6df91bb7-kube-api-access-4zxqn\") pod \"packageserver-7d4fc7d867-67tlp\" (UID: \"d69099b1-1e3d-4007-b1d1-039d6df91bb7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098389 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ec74add6-7b16-4c96-aee1-336b53788c2a-config\") pod \"etcd-operator-69b85846b6-w89ff\" (UID: \"ec74add6-7b16-4c96-aee1-336b53788c2a\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098427 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee755aa6-a943-4869-a688-c0da5d38aafa-config\") pod \"kube-storage-version-migrator-operator-565b79b866-jvzxb\" (UID: \"ee755aa6-a943-4869-a688-c0da5d38aafa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-jvzxb" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098462 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d69099b1-1e3d-4007-b1d1-039d6df91bb7-tmpfs\") pod \"packageserver-7d4fc7d867-67tlp\" (UID: \"d69099b1-1e3d-4007-b1d1-039d6df91bb7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098487 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ec74add6-7b16-4c96-aee1-336b53788c2a-etcd-service-ca\") pod \"etcd-operator-69b85846b6-w89ff\" (UID: \"ec74add6-7b16-4c96-aee1-336b53788c2a\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098522 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-npn2w\" (UniqueName: \"kubernetes.io/projected/f258327b-5062-4466-bd24-cc22c2f56087-kube-api-access-npn2w\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098547 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6rsj\" (UniqueName: \"kubernetes.io/projected/ea2249f7-4927-4920-9ce2-aaa3cc5749ba-kube-api-access-s6rsj\") pod \"package-server-manager-77f986bd66-b798r\" (UID: \"ea2249f7-4927-4920-9ce2-aaa3cc5749ba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-b798r" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098584 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f258327b-5062-4466-bd24-cc22c2f56087-serving-cert\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098715 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea2249f7-4927-4920-9ce2-aaa3cc5749ba-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-b798r\" (UID: \"ea2249f7-4927-4920-9ce2-aaa3cc5749ba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-b798r" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098823 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ee755aa6-a943-4869-a688-c0da5d38aafa-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-jvzxb\" (UID: \"ee755aa6-a943-4869-a688-c0da5d38aafa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-jvzxb" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098855 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098883 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec74add6-7b16-4c96-aee1-336b53788c2a-serving-cert\") pod \"etcd-operator-69b85846b6-w89ff\" (UID: \"ec74add6-7b16-4c96-aee1-336b53788c2a\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098902 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmmh9\" (UniqueName: \"kubernetes.io/projected/ec74add6-7b16-4c96-aee1-336b53788c2a-kube-api-access-zmmh9\") pod \"etcd-operator-69b85846b6-w89ff\" (UID: \"ec74add6-7b16-4c96-aee1-336b53788c2a\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098926 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.098981 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xt77j\" (UniqueName: \"kubernetes.io/projected/7894c17b-6de7-426e-b27a-4834b7186e8f-kube-api-access-xt77j\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.099000 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d69099b1-1e3d-4007-b1d1-039d6df91bb7-apiservice-cert\") pod \"packageserver-7d4fc7d867-67tlp\" (UID: \"d69099b1-1e3d-4007-b1d1-039d6df91bb7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.099030 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-audit-policies\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.099054 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.099108 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d69099b1-1e3d-4007-b1d1-039d6df91bb7-webhook-cert\") pod \"packageserver-7d4fc7d867-67tlp\" (UID: \"d69099b1-1e3d-4007-b1d1-039d6df91bb7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.099130 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ec74add6-7b16-4c96-aee1-336b53788c2a-etcd-client\") pod \"etcd-operator-69b85846b6-w89ff\" (UID: \"ec74add6-7b16-4c96-aee1-336b53788c2a\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.099186 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f258327b-5062-4466-bd24-cc22c2f56087-audit-policies\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.099209 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ec74add6-7b16-4c96-aee1-336b53788c2a-etcd-ca\") pod \"etcd-operator-69b85846b6-w89ff\" (UID: \"ec74add6-7b16-4c96-aee1-336b53788c2a\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.099249 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.099269 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3e9d436d-36cf-4f6b-bbd3-6f1931e7228c-webhook-certs\") pod \"multus-admission-controller-69db94689b-zrclr\" (UID: \"3e9d436d-36cf-4f6b-bbd3-6f1931e7228c\") " pod="openshift-multus/multus-admission-controller-69db94689b-zrclr" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.099333 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mlhl2\" (UniqueName: \"kubernetes.io/projected/ee755aa6-a943-4869-a688-c0da5d38aafa-kube-api-access-mlhl2\") pod \"kube-storage-version-migrator-operator-565b79b866-jvzxb\" (UID: \"ee755aa6-a943-4869-a688-c0da5d38aafa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-jvzxb" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.099354 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f258327b-5062-4466-bd24-cc22c2f56087-audit-dir\") pod 
\"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.099371 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7894c17b-6de7-426e-b27a-4834b7186e8f-audit-dir\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.099389 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ec74add6-7b16-4c96-aee1-336b53788c2a-tmp-dir\") pod \"etcd-operator-69b85846b6-w89ff\" (UID: \"ec74add6-7b16-4c96-aee1-336b53788c2a\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.099432 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.099452 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.099490 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f258327b-5062-4466-bd24-cc22c2f56087-trusted-ca-bundle\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.099816 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f258327b-5062-4466-bd24-cc22c2f56087-etcd-serving-ca\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.100070 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f258327b-5062-4466-bd24-cc22c2f56087-trusted-ca-bundle\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.101132 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee755aa6-a943-4869-a688-c0da5d38aafa-config\") pod \"kube-storage-version-migrator-operator-565b79b866-jvzxb\" (UID: \"ee755aa6-a943-4869-a688-c0da5d38aafa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-jvzxb" Jan 22 14:15:57 crc kubenswrapper[5099]: 
I0122 14:15:57.105754 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.106737 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-audit-policies\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.110088 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.110453 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f258327b-5062-4466-bd24-cc22c2f56087-serving-cert\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.110975 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.111928 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.112003 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.112097 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.112532 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/f258327b-5062-4466-bd24-cc22c2f56087-etcd-client\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.112901 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7894c17b-6de7-426e-b27a-4834b7186e8f-audit-dir\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.112941 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f258327b-5062-4466-bd24-cc22c2f56087-audit-dir\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.113047 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f258327b-5062-4466-bd24-cc22c2f56087-audit-policies\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.115252 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f258327b-5062-4466-bd24-cc22c2f56087-encryption-config\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.116879 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee755aa6-a943-4869-a688-c0da5d38aafa-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-jvzxb\" (UID: \"ee755aa6-a943-4869-a688-c0da5d38aafa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-jvzxb" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.117744 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.119929 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-gflg8"] Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.121355 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.121491 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.121651 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-rwsnz" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.124767 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.124988 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.125005 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.125077 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.126441 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.126537 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.129830 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-npn2w\" (UniqueName: \"kubernetes.io/projected/f258327b-5062-4466-bd24-cc22c2f56087-kube-api-access-npn2w\") pod \"apiserver-8596bd845d-cm8k9\" (UID: \"f258327b-5062-4466-bd24-cc22c2f56087\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.133556 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.136711 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.144489 5099 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mlhl2\" (UniqueName: \"kubernetes.io/projected/ee755aa6-a943-4869-a688-c0da5d38aafa-kube-api-access-mlhl2\") pod \"kube-storage-version-migrator-operator-565b79b866-jvzxb\" (UID: \"ee755aa6-a943-4869-a688-c0da5d38aafa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-jvzxb" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.144744 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt77j\" (UniqueName: \"kubernetes.io/projected/7894c17b-6de7-426e-b27a-4834b7186e8f-kube-api-access-xt77j\") pod \"oauth-openshift-66458b6674-pfh7d\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.156031 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.199264 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.199687 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-rrp2m"] Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.200266 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s6rsj\" (UniqueName: \"kubernetes.io/projected/ea2249f7-4927-4920-9ce2-aaa3cc5749ba-kube-api-access-s6rsj\") pod \"package-server-manager-77f986bd66-b798r\" (UID: \"ea2249f7-4927-4920-9ce2-aaa3cc5749ba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-b798r" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.200538 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea2249f7-4927-4920-9ce2-aaa3cc5749ba-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-b798r\" (UID: \"ea2249f7-4927-4920-9ce2-aaa3cc5749ba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-b798r" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.200598 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec74add6-7b16-4c96-aee1-336b53788c2a-serving-cert\") pod \"etcd-operator-69b85846b6-w89ff\" (UID: \"ec74add6-7b16-4c96-aee1-336b53788c2a\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.200619 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zmmh9\" (UniqueName: \"kubernetes.io/projected/ec74add6-7b16-4c96-aee1-336b53788c2a-kube-api-access-zmmh9\") pod \"etcd-operator-69b85846b6-w89ff\" (UID: \"ec74add6-7b16-4c96-aee1-336b53788c2a\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.200685 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d69099b1-1e3d-4007-b1d1-039d6df91bb7-apiservice-cert\") pod \"packageserver-7d4fc7d867-67tlp\" (UID: \"d69099b1-1e3d-4007-b1d1-039d6df91bb7\") " 
pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.200730 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d69099b1-1e3d-4007-b1d1-039d6df91bb7-webhook-cert\") pod \"packageserver-7d4fc7d867-67tlp\" (UID: \"d69099b1-1e3d-4007-b1d1-039d6df91bb7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.200746 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ec74add6-7b16-4c96-aee1-336b53788c2a-etcd-client\") pod \"etcd-operator-69b85846b6-w89ff\" (UID: \"ec74add6-7b16-4c96-aee1-336b53788c2a\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.200765 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ec74add6-7b16-4c96-aee1-336b53788c2a-etcd-ca\") pod \"etcd-operator-69b85846b6-w89ff\" (UID: \"ec74add6-7b16-4c96-aee1-336b53788c2a\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.200801 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3e9d436d-36cf-4f6b-bbd3-6f1931e7228c-webhook-certs\") pod \"multus-admission-controller-69db94689b-zrclr\" (UID: \"3e9d436d-36cf-4f6b-bbd3-6f1931e7228c\") " pod="openshift-multus/multus-admission-controller-69db94689b-zrclr" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.200837 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ec74add6-7b16-4c96-aee1-336b53788c2a-tmp-dir\") pod \"etcd-operator-69b85846b6-w89ff\" (UID: \"ec74add6-7b16-4c96-aee1-336b53788c2a\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.200857 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d187979f-bd07-4330-bc66-d4e12f068dda-signing-key\") pod \"service-ca-74545575db-rwsnz\" (UID: \"d187979f-bd07-4330-bc66-d4e12f068dda\") " pod="openshift-service-ca/service-ca-74545575db-rwsnz" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.200879 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d187979f-bd07-4330-bc66-d4e12f068dda-signing-cabundle\") pod \"service-ca-74545575db-rwsnz\" (UID: \"d187979f-bd07-4330-bc66-d4e12f068dda\") " pod="openshift-service-ca/service-ca-74545575db-rwsnz" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.200988 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nlkkl\" (UniqueName: \"kubernetes.io/projected/3e9d436d-36cf-4f6b-bbd3-6f1931e7228c-kube-api-access-nlkkl\") pod \"multus-admission-controller-69db94689b-zrclr\" (UID: \"3e9d436d-36cf-4f6b-bbd3-6f1931e7228c\") " pod="openshift-multus/multus-admission-controller-69db94689b-zrclr" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.201034 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4zxqn\" (UniqueName: 
\"kubernetes.io/projected/d69099b1-1e3d-4007-b1d1-039d6df91bb7-kube-api-access-4zxqn\") pod \"packageserver-7d4fc7d867-67tlp\" (UID: \"d69099b1-1e3d-4007-b1d1-039d6df91bb7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.201059 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec74add6-7b16-4c96-aee1-336b53788c2a-config\") pod \"etcd-operator-69b85846b6-w89ff\" (UID: \"ec74add6-7b16-4c96-aee1-336b53788c2a\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.201078 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdjmg\" (UniqueName: \"kubernetes.io/projected/d187979f-bd07-4330-bc66-d4e12f068dda-kube-api-access-kdjmg\") pod \"service-ca-74545575db-rwsnz\" (UID: \"d187979f-bd07-4330-bc66-d4e12f068dda\") " pod="openshift-service-ca/service-ca-74545575db-rwsnz" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.201106 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d69099b1-1e3d-4007-b1d1-039d6df91bb7-tmpfs\") pod \"packageserver-7d4fc7d867-67tlp\" (UID: \"d69099b1-1e3d-4007-b1d1-039d6df91bb7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.201123 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ec74add6-7b16-4c96-aee1-336b53788c2a-etcd-service-ca\") pod \"etcd-operator-69b85846b6-w89ff\" (UID: \"ec74add6-7b16-4c96-aee1-336b53788c2a\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.202500 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ec74add6-7b16-4c96-aee1-336b53788c2a-tmp-dir\") pod \"etcd-operator-69b85846b6-w89ff\" (UID: \"ec74add6-7b16-4c96-aee1-336b53788c2a\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.205311 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ec74add6-7b16-4c96-aee1-336b53788c2a-etcd-service-ca\") pod \"etcd-operator-69b85846b6-w89ff\" (UID: \"ec74add6-7b16-4c96-aee1-336b53788c2a\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.212572 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-jvzxb" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.212676 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec74add6-7b16-4c96-aee1-336b53788c2a-config\") pod \"etcd-operator-69b85846b6-w89ff\" (UID: \"ec74add6-7b16-4c96-aee1-336b53788c2a\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.213481 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ec74add6-7b16-4c96-aee1-336b53788c2a-etcd-ca\") pod \"etcd-operator-69b85846b6-w89ff\" (UID: \"ec74add6-7b16-4c96-aee1-336b53788c2a\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.213943 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d69099b1-1e3d-4007-b1d1-039d6df91bb7-tmpfs\") pod \"packageserver-7d4fc7d867-67tlp\" (UID: \"d69099b1-1e3d-4007-b1d1-039d6df91bb7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.215055 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3e9d436d-36cf-4f6b-bbd3-6f1931e7228c-webhook-certs\") pod \"multus-admission-controller-69db94689b-zrclr\" (UID: \"3e9d436d-36cf-4f6b-bbd3-6f1931e7228c\") " pod="openshift-multus/multus-admission-controller-69db94689b-zrclr" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.216118 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea2249f7-4927-4920-9ce2-aaa3cc5749ba-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-b798r\" (UID: \"ea2249f7-4927-4920-9ce2-aaa3cc5749ba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-b798r" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.217970 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d69099b1-1e3d-4007-b1d1-039d6df91bb7-apiservice-cert\") pod \"packageserver-7d4fc7d867-67tlp\" (UID: \"d69099b1-1e3d-4007-b1d1-039d6df91bb7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.219005 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d69099b1-1e3d-4007-b1d1-039d6df91bb7-webhook-cert\") pod \"packageserver-7d4fc7d867-67tlp\" (UID: \"d69099b1-1e3d-4007-b1d1-039d6df91bb7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.220929 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec74add6-7b16-4c96-aee1-336b53788c2a-serving-cert\") pod \"etcd-operator-69b85846b6-w89ff\" (UID: \"ec74add6-7b16-4c96-aee1-336b53788c2a\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.229826 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/ec74add6-7b16-4c96-aee1-336b53788c2a-etcd-client\") pod \"etcd-operator-69b85846b6-w89ff\" (UID: \"ec74add6-7b16-4c96-aee1-336b53788c2a\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.236139 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6rsj\" (UniqueName: \"kubernetes.io/projected/ea2249f7-4927-4920-9ce2-aaa3cc5749ba-kube-api-access-s6rsj\") pod \"package-server-manager-77f986bd66-b798r\" (UID: \"ea2249f7-4927-4920-9ce2-aaa3cc5749ba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-b798r" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.252605 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmmh9\" (UniqueName: \"kubernetes.io/projected/ec74add6-7b16-4c96-aee1-336b53788c2a-kube-api-access-zmmh9\") pod \"etcd-operator-69b85846b6-w89ff\" (UID: \"ec74add6-7b16-4c96-aee1-336b53788c2a\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.253490 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-b798r" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.267730 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.314706 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zxqn\" (UniqueName: \"kubernetes.io/projected/d69099b1-1e3d-4007-b1d1-039d6df91bb7-kube-api-access-4zxqn\") pod \"packageserver-7d4fc7d867-67tlp\" (UID: \"d69099b1-1e3d-4007-b1d1-039d6df91bb7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.321504 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e07e6e00-cfcc-4513-b231-8d27833d8687-trusted-ca-bundle\") pod \"console-64d44f6ddf-gflg8\" (UID: \"e07e6e00-cfcc-4513-b231-8d27833d8687\") " pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.321628 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kdjmg\" (UniqueName: \"kubernetes.io/projected/d187979f-bd07-4330-bc66-d4e12f068dda-kube-api-access-kdjmg\") pod \"service-ca-74545575db-rwsnz\" (UID: \"d187979f-bd07-4330-bc66-d4e12f068dda\") " pod="openshift-service-ca/service-ca-74545575db-rwsnz" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.321702 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e07e6e00-cfcc-4513-b231-8d27833d8687-console-oauth-config\") pod \"console-64d44f6ddf-gflg8\" (UID: \"e07e6e00-cfcc-4513-b231-8d27833d8687\") " pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.326171 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e07e6e00-cfcc-4513-b231-8d27833d8687-console-config\") pod \"console-64d44f6ddf-gflg8\" (UID: \"e07e6e00-cfcc-4513-b231-8d27833d8687\") " 
pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.326249 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zg9n\" (UniqueName: \"kubernetes.io/projected/e07e6e00-cfcc-4513-b231-8d27833d8687-kube-api-access-6zg9n\") pod \"console-64d44f6ddf-gflg8\" (UID: \"e07e6e00-cfcc-4513-b231-8d27833d8687\") " pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.326470 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e07e6e00-cfcc-4513-b231-8d27833d8687-console-serving-cert\") pod \"console-64d44f6ddf-gflg8\" (UID: \"e07e6e00-cfcc-4513-b231-8d27833d8687\") " pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.326569 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d187979f-bd07-4330-bc66-d4e12f068dda-signing-key\") pod \"service-ca-74545575db-rwsnz\" (UID: \"d187979f-bd07-4330-bc66-d4e12f068dda\") " pod="openshift-service-ca/service-ca-74545575db-rwsnz" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.326602 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d187979f-bd07-4330-bc66-d4e12f068dda-signing-cabundle\") pod \"service-ca-74545575db-rwsnz\" (UID: \"d187979f-bd07-4330-bc66-d4e12f068dda\") " pod="openshift-service-ca/service-ca-74545575db-rwsnz" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.326628 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e07e6e00-cfcc-4513-b231-8d27833d8687-service-ca\") pod \"console-64d44f6ddf-gflg8\" (UID: \"e07e6e00-cfcc-4513-b231-8d27833d8687\") " pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.326650 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e07e6e00-cfcc-4513-b231-8d27833d8687-oauth-serving-cert\") pod \"console-64d44f6ddf-gflg8\" (UID: \"e07e6e00-cfcc-4513-b231-8d27833d8687\") " pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.332220 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlkkl\" (UniqueName: \"kubernetes.io/projected/3e9d436d-36cf-4f6b-bbd3-6f1931e7228c-kube-api-access-nlkkl\") pod \"multus-admission-controller-69db94689b-zrclr\" (UID: \"3e9d436d-36cf-4f6b-bbd3-6f1931e7228c\") " pod="openshift-multus/multus-admission-controller-69db94689b-zrclr" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.337590 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d187979f-bd07-4330-bc66-d4e12f068dda-signing-cabundle\") pod \"service-ca-74545575db-rwsnz\" (UID: \"d187979f-bd07-4330-bc66-d4e12f068dda\") " pod="openshift-service-ca/service-ca-74545575db-rwsnz" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.345196 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/d187979f-bd07-4330-bc66-d4e12f068dda-signing-key\") pod \"service-ca-74545575db-rwsnz\" (UID: \"d187979f-bd07-4330-bc66-d4e12f068dda\") " pod="openshift-service-ca/service-ca-74545575db-rwsnz" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.353397 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-mvtfg"] Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.354073 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rrp2m" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.355225 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.359611 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.365416 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdjmg\" (UniqueName: \"kubernetes.io/projected/d187979f-bd07-4330-bc66-d4e12f068dda-kube-api-access-kdjmg\") pod \"service-ca-74545575db-rwsnz\" (UID: \"d187979f-bd07-4330-bc66-d4e12f068dda\") " pod="openshift-service-ca/service-ca-74545575db-rwsnz" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.396889 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.413890 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 22 14:15:57 crc kubenswrapper[5099]: W0122 14:15:57.419375 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20945218_8d75_4f6e_ac07_6815b888e9b7.slice/crio-eb4fed89f958eeb594726e94f62b8f01d581a2ab6c5c6334cc2339711fc5c84c WatchSource:0}: Error finding container eb4fed89f958eeb594726e94f62b8f01d581a2ab6c5c6334cc2339711fc5c84c: Status 404 returned error can't find the container with id eb4fed89f958eeb594726e94f62b8f01d581a2ab6c5c6334cc2339711fc5c84c Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.428127 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e07e6e00-cfcc-4513-b231-8d27833d8687-trusted-ca-bundle\") pod \"console-64d44f6ddf-gflg8\" (UID: \"e07e6e00-cfcc-4513-b231-8d27833d8687\") " pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.428210 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfd2c2fb-9eff-419e-a57c-e8b0c4d73981-serving-cert\") pod \"service-ca-operator-5b9c976747-rrp2m\" (UID: \"bfd2c2fb-9eff-419e-a57c-e8b0c4d73981\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rrp2m" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.428264 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfd2c2fb-9eff-419e-a57c-e8b0c4d73981-config\") pod \"service-ca-operator-5b9c976747-rrp2m\" (UID: \"bfd2c2fb-9eff-419e-a57c-e8b0c4d73981\") 
" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rrp2m" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.428313 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e07e6e00-cfcc-4513-b231-8d27833d8687-console-oauth-config\") pod \"console-64d44f6ddf-gflg8\" (UID: \"e07e6e00-cfcc-4513-b231-8d27833d8687\") " pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.428351 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e07e6e00-cfcc-4513-b231-8d27833d8687-console-config\") pod \"console-64d44f6ddf-gflg8\" (UID: \"e07e6e00-cfcc-4513-b231-8d27833d8687\") " pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.428374 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6zg9n\" (UniqueName: \"kubernetes.io/projected/e07e6e00-cfcc-4513-b231-8d27833d8687-kube-api-access-6zg9n\") pod \"console-64d44f6ddf-gflg8\" (UID: \"e07e6e00-cfcc-4513-b231-8d27833d8687\") " pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.428406 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smk82\" (UniqueName: \"kubernetes.io/projected/bfd2c2fb-9eff-419e-a57c-e8b0c4d73981-kube-api-access-smk82\") pod \"service-ca-operator-5b9c976747-rrp2m\" (UID: \"bfd2c2fb-9eff-419e-a57c-e8b0c4d73981\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rrp2m" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.428464 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e07e6e00-cfcc-4513-b231-8d27833d8687-console-serving-cert\") pod \"console-64d44f6ddf-gflg8\" (UID: \"e07e6e00-cfcc-4513-b231-8d27833d8687\") " pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.428516 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e07e6e00-cfcc-4513-b231-8d27833d8687-service-ca\") pod \"console-64d44f6ddf-gflg8\" (UID: \"e07e6e00-cfcc-4513-b231-8d27833d8687\") " pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.428539 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e07e6e00-cfcc-4513-b231-8d27833d8687-oauth-serving-cert\") pod \"console-64d44f6ddf-gflg8\" (UID: \"e07e6e00-cfcc-4513-b231-8d27833d8687\") " pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.430531 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e07e6e00-cfcc-4513-b231-8d27833d8687-oauth-serving-cert\") pod \"console-64d44f6ddf-gflg8\" (UID: \"e07e6e00-cfcc-4513-b231-8d27833d8687\") " pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.435317 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.435417 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.445280 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9nglq"] Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.445551 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-mvtfg" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.457805 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.475313 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.494877 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.506079 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" event={"ID":"04ae431a-3013-4a10-95c2-3b4cb5c52cd0","Type":"ContainerStarted","Data":"9c825922345dd3259035337e65184de7c47ac599465b143f6fe73bb07349d217"} Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.506117 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-8cgs9" event={"ID":"b1a399d2-6deb-4080-8525-6419d061001b","Type":"ContainerStarted","Data":"bdffbe98dcf955fff4de07c45914077ca1eb424147110c66fea796cfa046366e"} Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.506128 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" event={"ID":"94313bd3-0b8e-452e-b3b0-c549aabb8426","Type":"ContainerStarted","Data":"dcc461368224f657e16cb3d574a771c54a519ee2d1dc4f4549e2c77b3542bdf1"} Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.506138 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rf4fl" event={"ID":"0fe48341-9de1-4b01-957e-16aee9a3eb75","Type":"ContainerStarted","Data":"7442b2e0999b16d8205fa263b4b78aaca7ea61c05b6643e9881901c87494624f"} Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.506185 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-dz5c2"] Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.506346 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.513355 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.530319 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfd2c2fb-9eff-419e-a57c-e8b0c4d73981-config\") pod \"service-ca-operator-5b9c976747-rrp2m\" (UID: \"bfd2c2fb-9eff-419e-a57c-e8b0c4d73981\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rrp2m" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.530640 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88zhf\" (UniqueName: \"kubernetes.io/projected/01a5601d-43bd-4873-ad86-10e225b7c31b-kube-api-access-88zhf\") pod \"openshift-apiserver-operator-846cbfc458-mvtfg\" (UID: \"01a5601d-43bd-4873-ad86-10e225b7c31b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-mvtfg" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.530693 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad87e7e8-19c1-4c92-9400-9873a85e80b4-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-9nglq\" (UID: \"ad87e7e8-19c1-4c92-9400-9873a85e80b4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.530717 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ad87e7e8-19c1-4c92-9400-9873a85e80b4-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-9nglq\" (UID: \"ad87e7e8-19c1-4c92-9400-9873a85e80b4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.531106 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g95w7\" (UniqueName: \"kubernetes.io/projected/ad87e7e8-19c1-4c92-9400-9873a85e80b4-kube-api-access-g95w7\") pod \"marketplace-operator-547dbd544d-9nglq\" (UID: \"ad87e7e8-19c1-4c92-9400-9873a85e80b4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.531222 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ad87e7e8-19c1-4c92-9400-9873a85e80b4-tmp\") pod \"marketplace-operator-547dbd544d-9nglq\" (UID: \"ad87e7e8-19c1-4c92-9400-9873a85e80b4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.531258 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-smk82\" (UniqueName: \"kubernetes.io/projected/bfd2c2fb-9eff-419e-a57c-e8b0c4d73981-kube-api-access-smk82\") pod \"service-ca-operator-5b9c976747-rrp2m\" (UID: \"bfd2c2fb-9eff-419e-a57c-e8b0c4d73981\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rrp2m" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.531502 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/01a5601d-43bd-4873-ad86-10e225b7c31b-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-mvtfg\" (UID: \"01a5601d-43bd-4873-ad86-10e225b7c31b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-mvtfg" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.531534 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a5601d-43bd-4873-ad86-10e225b7c31b-config\") pod \"openshift-apiserver-operator-846cbfc458-mvtfg\" (UID: \"01a5601d-43bd-4873-ad86-10e225b7c31b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-mvtfg" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.531624 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfd2c2fb-9eff-419e-a57c-e8b0c4d73981-serving-cert\") pod \"service-ca-operator-5b9c976747-rrp2m\" (UID: \"bfd2c2fb-9eff-419e-a57c-e8b0c4d73981\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rrp2m" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.532265 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfd2c2fb-9eff-419e-a57c-e8b0c4d73981-config\") pod \"service-ca-operator-5b9c976747-rrp2m\" (UID: \"bfd2c2fb-9eff-419e-a57c-e8b0c4d73981\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rrp2m" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.532950 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.540370 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-zrclr" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.541538 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e07e6e00-cfcc-4513-b231-8d27833d8687-console-config\") pod \"console-64d44f6ddf-gflg8\" (UID: \"e07e6e00-cfcc-4513-b231-8d27833d8687\") " pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.543762 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfd2c2fb-9eff-419e-a57c-e8b0c4d73981-serving-cert\") pod \"service-ca-operator-5b9c976747-rrp2m\" (UID: \"bfd2c2fb-9eff-419e-a57c-e8b0c4d73981\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rrp2m" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.545306 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e07e6e00-cfcc-4513-b231-8d27833d8687-console-oauth-config\") pod \"console-64d44f6ddf-gflg8\" (UID: \"e07e6e00-cfcc-4513-b231-8d27833d8687\") " pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.563864 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.577681 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.585015 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.586387 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e07e6e00-cfcc-4513-b231-8d27833d8687-trusted-ca-bundle\") pod \"console-64d44f6ddf-gflg8\" (UID: \"e07e6e00-cfcc-4513-b231-8d27833d8687\") " pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.595408 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.604106 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e07e6e00-cfcc-4513-b231-8d27833d8687-console-serving-cert\") pod \"console-64d44f6ddf-gflg8\" (UID: \"e07e6e00-cfcc-4513-b231-8d27833d8687\") " pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.632692 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ad87e7e8-19c1-4c92-9400-9873a85e80b4-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-9nglq\" (UID: \"ad87e7e8-19c1-4c92-9400-9873a85e80b4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.632985 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g95w7\" (UniqueName: \"kubernetes.io/projected/ad87e7e8-19c1-4c92-9400-9873a85e80b4-kube-api-access-g95w7\") pod 
\"marketplace-operator-547dbd544d-9nglq\" (UID: \"ad87e7e8-19c1-4c92-9400-9873a85e80b4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.633070 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ad87e7e8-19c1-4c92-9400-9873a85e80b4-tmp\") pod \"marketplace-operator-547dbd544d-9nglq\" (UID: \"ad87e7e8-19c1-4c92-9400-9873a85e80b4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.633400 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01a5601d-43bd-4873-ad86-10e225b7c31b-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-mvtfg\" (UID: \"01a5601d-43bd-4873-ad86-10e225b7c31b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-mvtfg" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.633442 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a5601d-43bd-4873-ad86-10e225b7c31b-config\") pod \"openshift-apiserver-operator-846cbfc458-mvtfg\" (UID: \"01a5601d-43bd-4873-ad86-10e225b7c31b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-mvtfg" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.633516 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ad87e7e8-19c1-4c92-9400-9873a85e80b4-tmp\") pod \"marketplace-operator-547dbd544d-9nglq\" (UID: \"ad87e7e8-19c1-4c92-9400-9873a85e80b4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.633574 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-88zhf\" (UniqueName: \"kubernetes.io/projected/01a5601d-43bd-4873-ad86-10e225b7c31b-kube-api-access-88zhf\") pod \"openshift-apiserver-operator-846cbfc458-mvtfg\" (UID: \"01a5601d-43bd-4873-ad86-10e225b7c31b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-mvtfg" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.633599 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad87e7e8-19c1-4c92-9400-9873a85e80b4-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-9nglq\" (UID: \"ad87e7e8-19c1-4c92-9400-9873a85e80b4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.651490 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zg9n\" (UniqueName: \"kubernetes.io/projected/e07e6e00-cfcc-4513-b231-8d27833d8687-kube-api-access-6zg9n\") pod \"console-64d44f6ddf-gflg8\" (UID: \"e07e6e00-cfcc-4513-b231-8d27833d8687\") " pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.653821 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.660823 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-rwsnz" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.671882 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01a5601d-43bd-4873-ad86-10e225b7c31b-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-mvtfg\" (UID: \"01a5601d-43bd-4873-ad86-10e225b7c31b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-mvtfg" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.672283 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.693868 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.712744 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.732366 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.754325 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.768957 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ad87e7e8-19c1-4c92-9400-9873a85e80b4-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-9nglq\" (UID: \"ad87e7e8-19c1-4c92-9400-9873a85e80b4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.773659 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.799786 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.805495 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad87e7e8-19c1-4c92-9400-9873a85e80b4-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-9nglq\" (UID: \"ad87e7e8-19c1-4c92-9400-9873a85e80b4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.813540 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.832802 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.837403 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e07e6e00-cfcc-4513-b231-8d27833d8687-service-ca\") pod \"console-64d44f6ddf-gflg8\" (UID: \"e07e6e00-cfcc-4513-b231-8d27833d8687\") " 
pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.838490 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a5601d-43bd-4873-ad86-10e225b7c31b-config\") pod \"openshift-apiserver-operator-846cbfc458-mvtfg\" (UID: \"01a5601d-43bd-4873-ad86-10e225b7c31b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-mvtfg" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.847680 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-g87pb" event={"ID":"c451b1af-aa09-4621-b237-72a053e5e347","Type":"ContainerStarted","Data":"7e0aa78dfa83b00985b61af5bb8d5b7ee74d57458bb4b3649b835e387ef132a5"} Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.847931 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zdwjf"] Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.848125 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-dz5c2" Jan 22 14:15:57 crc kubenswrapper[5099]: W0122 14:15:57.857451 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a2c820d_64d7_48dc_845e_6fea4213ccbe.slice/crio-c0bb71f0283f4a08232f8cdf313ce5be0fce745867c894ac9091f6968c3a0706 WatchSource:0}: Error finding container c0bb71f0283f4a08232f8cdf313ce5be0fce745867c894ac9091f6968c3a0706: Status 404 returned error can't find the container with id c0bb71f0283f4a08232f8cdf313ce5be0fce745867c894ac9091f6968c3a0706 Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.867234 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-2pf7j" event={"ID":"65c18850-3f68-4603-955b-12a4ab882766","Type":"ContainerStarted","Data":"2aa3c25f2adfbd9543be3cba474ba5fdbe419fc133f21b393f91804815c8680b"} Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.867705 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm"] Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.867334 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zdwjf" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.900328 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-smk82\" (UniqueName: \"kubernetes.io/projected/bfd2c2fb-9eff-419e-a57c-e8b0c4d73981-kube-api-access-smk82\") pod \"service-ca-operator-5b9c976747-rrp2m\" (UID: \"bfd2c2fb-9eff-419e-a57c-e8b0c4d73981\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rrp2m" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.910864 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g95w7\" (UniqueName: \"kubernetes.io/projected/ad87e7e8-19c1-4c92-9400-9873a85e80b4-kube-api-access-g95w7\") pod \"marketplace-operator-547dbd544d-9nglq\" (UID: \"ad87e7e8-19c1-4c92-9400-9873a85e80b4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.932104 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-88zhf\" (UniqueName: \"kubernetes.io/projected/01a5601d-43bd-4873-ad86-10e225b7c31b-kube-api-access-88zhf\") pod \"openshift-apiserver-operator-846cbfc458-mvtfg\" (UID: \"01a5601d-43bd-4873-ad86-10e225b7c31b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-mvtfg" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.932138 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.936951 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e31d46d0-1fc7-4e35-9e8a-6f8033388332-machine-approver-tls\") pod \"machine-approver-54c688565-dz5c2\" (UID: \"e31d46d0-1fc7-4e35-9e8a-6f8033388332\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dz5c2" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.937015 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e31d46d0-1fc7-4e35-9e8a-6f8033388332-config\") pod \"machine-approver-54c688565-dz5c2\" (UID: \"e31d46d0-1fc7-4e35-9e8a-6f8033388332\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dz5c2" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.937037 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l496\" (UniqueName: \"kubernetes.io/projected/e31d46d0-1fc7-4e35-9e8a-6f8033388332-kube-api-access-9l496\") pod \"machine-approver-54c688565-dz5c2\" (UID: \"e31d46d0-1fc7-4e35-9e8a-6f8033388332\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dz5c2" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.937102 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8c8db120-c7b7-4a59-933e-9eeda52f3a7d-images\") pod \"machine-config-operator-67c9d58cbb-zdwjf\" (UID: \"8c8db120-c7b7-4a59-933e-9eeda52f3a7d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zdwjf" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.937182 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c8db120-c7b7-4a59-933e-9eeda52f3a7d-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-zdwjf\" (UID: \"8c8db120-c7b7-4a59-933e-9eeda52f3a7d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zdwjf" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.937250 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e31d46d0-1fc7-4e35-9e8a-6f8033388332-auth-proxy-config\") pod \"machine-approver-54c688565-dz5c2\" (UID: \"e31d46d0-1fc7-4e35-9e8a-6f8033388332\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dz5c2" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.937298 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c8db120-c7b7-4a59-933e-9eeda52f3a7d-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-zdwjf\" (UID: \"8c8db120-c7b7-4a59-933e-9eeda52f3a7d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zdwjf" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.937398 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m98rg\" (UniqueName: \"kubernetes.io/projected/8c8db120-c7b7-4a59-933e-9eeda52f3a7d-kube-api-access-m98rg\") pod \"machine-config-operator-67c9d58cbb-zdwjf\" (UID: \"8c8db120-c7b7-4a59-933e-9eeda52f3a7d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zdwjf" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.953691 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.972806 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.996689 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-v7shj"] Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.997547 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 22 14:15:57 crc kubenswrapper[5099]: I0122 14:15:57.997650 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.007664 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.007909 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-v7shj" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.013212 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.029770 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rrp2m" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.030326 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-ldzlj" event={"ID":"513664b2-32c9-4290-9ae7-2400a1c4da84","Type":"ContainerStarted","Data":"bc8d1d7c300478a5f4a7be2b080294a941bac2ea41d6e0964e8993e92cb12240"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.030405 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-q6pjq" event={"ID":"c829ef61-cc95-4adc-88a6-433ed349112d","Type":"ContainerStarted","Data":"62bd3c730c19ba4bc58142dc7d3b404d24e0c2d187c0bbe3430a2cd1f1739dc0"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.030432 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-t6hxb"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.030500 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.033352 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.038317 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec31b794-4b87-452d-8e77-14a8000d6812-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-v7shj\" (UID: \"ec31b794-4b87-452d-8e77-14a8000d6812\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-v7shj" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.038370 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e31d46d0-1fc7-4e35-9e8a-6f8033388332-auth-proxy-config\") pod \"machine-approver-54c688565-dz5c2\" (UID: \"e31d46d0-1fc7-4e35-9e8a-6f8033388332\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dz5c2" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.038396 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4ce76424-f6a9-4f4d-8cea-7935ccf75fca-srv-cert\") pod \"olm-operator-5cdf44d969-rj8rm\" (UID: \"4ce76424-f6a9-4f4d-8cea-7935ccf75fca\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.038416 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6llp\" (UniqueName: \"kubernetes.io/projected/4ce76424-f6a9-4f4d-8cea-7935ccf75fca-kube-api-access-r6llp\") pod \"olm-operator-5cdf44d969-rj8rm\" (UID: \"4ce76424-f6a9-4f4d-8cea-7935ccf75fca\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.038441 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c8db120-c7b7-4a59-933e-9eeda52f3a7d-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-zdwjf\" (UID: \"8c8db120-c7b7-4a59-933e-9eeda52f3a7d\") " 
pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zdwjf" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.038472 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m98rg\" (UniqueName: \"kubernetes.io/projected/8c8db120-c7b7-4a59-933e-9eeda52f3a7d-kube-api-access-m98rg\") pod \"machine-config-operator-67c9d58cbb-zdwjf\" (UID: \"8c8db120-c7b7-4a59-933e-9eeda52f3a7d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zdwjf" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.038546 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e31d46d0-1fc7-4e35-9e8a-6f8033388332-machine-approver-tls\") pod \"machine-approver-54c688565-dz5c2\" (UID: \"e31d46d0-1fc7-4e35-9e8a-6f8033388332\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dz5c2" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.038617 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e31d46d0-1fc7-4e35-9e8a-6f8033388332-config\") pod \"machine-approver-54c688565-dz5c2\" (UID: \"e31d46d0-1fc7-4e35-9e8a-6f8033388332\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dz5c2" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.038646 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9l496\" (UniqueName: \"kubernetes.io/projected/e31d46d0-1fc7-4e35-9e8a-6f8033388332-kube-api-access-9l496\") pod \"machine-approver-54c688565-dz5c2\" (UID: \"e31d46d0-1fc7-4e35-9e8a-6f8033388332\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dz5c2" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.038696 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4ce76424-f6a9-4f4d-8cea-7935ccf75fca-profile-collector-cert\") pod \"olm-operator-5cdf44d969-rj8rm\" (UID: \"4ce76424-f6a9-4f4d-8cea-7935ccf75fca\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.038731 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbv4k\" (UniqueName: \"kubernetes.io/projected/ec31b794-4b87-452d-8e77-14a8000d6812-kube-api-access-lbv4k\") pod \"machine-config-controller-f9cdd68f7-v7shj\" (UID: \"ec31b794-4b87-452d-8e77-14a8000d6812\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-v7shj" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.038755 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8c8db120-c7b7-4a59-933e-9eeda52f3a7d-images\") pod \"machine-config-operator-67c9d58cbb-zdwjf\" (UID: \"8c8db120-c7b7-4a59-933e-9eeda52f3a7d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zdwjf" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.038775 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4ce76424-f6a9-4f4d-8cea-7935ccf75fca-tmpfs\") pod \"olm-operator-5cdf44d969-rj8rm\" (UID: \"4ce76424-f6a9-4f4d-8cea-7935ccf75fca\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.038797 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c8db120-c7b7-4a59-933e-9eeda52f3a7d-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-zdwjf\" (UID: \"8c8db120-c7b7-4a59-933e-9eeda52f3a7d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zdwjf" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.038822 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ec31b794-4b87-452d-8e77-14a8000d6812-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-v7shj\" (UID: \"ec31b794-4b87-452d-8e77-14a8000d6812\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-v7shj" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.039619 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e31d46d0-1fc7-4e35-9e8a-6f8033388332-auth-proxy-config\") pod \"machine-approver-54c688565-dz5c2\" (UID: \"e31d46d0-1fc7-4e35-9e8a-6f8033388332\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dz5c2" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.040684 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e31d46d0-1fc7-4e35-9e8a-6f8033388332-config\") pod \"machine-approver-54c688565-dz5c2\" (UID: \"e31d46d0-1fc7-4e35-9e8a-6f8033388332\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dz5c2" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.041099 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c8db120-c7b7-4a59-933e-9eeda52f3a7d-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-zdwjf\" (UID: \"8c8db120-c7b7-4a59-933e-9eeda52f3a7d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zdwjf" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.047219 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.047712 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e31d46d0-1fc7-4e35-9e8a-6f8033388332-machine-approver-tls\") pod \"machine-approver-54c688565-dz5c2\" (UID: \"e31d46d0-1fc7-4e35-9e8a-6f8033388332\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dz5c2" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.055124 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.060783 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-q6pjq" event={"ID":"c829ef61-cc95-4adc-88a6-433ed349112d","Type":"ContainerStarted","Data":"aa02b86791ecbc8496a6871598d30a45f2327ace0ef836c76b06ae77fe079c2b"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.060854 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl" event={"ID":"b8408c5c-b382-4045-96dd-4204e71d0798","Type":"ContainerStarted","Data":"94d51d702ba0fbc8a520a9dfa17af64d4b717f1ed314461b0a917bcbef7571c0"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.060885 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jw44r"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.061307 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8c8db120-c7b7-4a59-933e-9eeda52f3a7d-images\") pod \"machine-config-operator-67c9d58cbb-zdwjf\" (UID: \"8c8db120-c7b7-4a59-933e-9eeda52f3a7d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zdwjf" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.061487 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-t6hxb" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.070652 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-mvtfg" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.073517 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.094292 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.106503 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c8db120-c7b7-4a59-933e-9eeda52f3a7d-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-zdwjf\" (UID: \"8c8db120-c7b7-4a59-933e-9eeda52f3a7d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zdwjf" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.134666 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.139779 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4ce76424-f6a9-4f4d-8cea-7935ccf75fca-profile-collector-cert\") pod \"olm-operator-5cdf44d969-rj8rm\" (UID: \"4ce76424-f6a9-4f4d-8cea-7935ccf75fca\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.139825 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lbv4k\" (UniqueName: \"kubernetes.io/projected/ec31b794-4b87-452d-8e77-14a8000d6812-kube-api-access-lbv4k\") pod \"machine-config-controller-f9cdd68f7-v7shj\" (UID: \"ec31b794-4b87-452d-8e77-14a8000d6812\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-v7shj" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.139862 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/70535248-a215-4c17-bae6-b193309b6ab3-tmpfs\") pod \"catalog-operator-75ff9f647d-tgcgt\" (UID: \"70535248-a215-4c17-bae6-b193309b6ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.139908 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4ce76424-f6a9-4f4d-8cea-7935ccf75fca-tmpfs\") pod \"olm-operator-5cdf44d969-rj8rm\" (UID: \"4ce76424-f6a9-4f4d-8cea-7935ccf75fca\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.140272 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ec31b794-4b87-452d-8e77-14a8000d6812-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-v7shj\" (UID: \"ec31b794-4b87-452d-8e77-14a8000d6812\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-v7shj" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.140333 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/ec31b794-4b87-452d-8e77-14a8000d6812-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-v7shj\" (UID: \"ec31b794-4b87-452d-8e77-14a8000d6812\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-v7shj" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.140366 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/70535248-a215-4c17-bae6-b193309b6ab3-srv-cert\") pod \"catalog-operator-75ff9f647d-tgcgt\" (UID: \"70535248-a215-4c17-bae6-b193309b6ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.140493 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4ce76424-f6a9-4f4d-8cea-7935ccf75fca-srv-cert\") pod \"olm-operator-5cdf44d969-rj8rm\" (UID: \"4ce76424-f6a9-4f4d-8cea-7935ccf75fca\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.140517 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r6llp\" (UniqueName: \"kubernetes.io/projected/4ce76424-f6a9-4f4d-8cea-7935ccf75fca-kube-api-access-r6llp\") pod \"olm-operator-5cdf44d969-rj8rm\" (UID: \"4ce76424-f6a9-4f4d-8cea-7935ccf75fca\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.140722 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d12cb731-4816-42a2-9a62-84a8695ece7e-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-t6hxb\" (UID: \"d12cb731-4816-42a2-9a62-84a8695ece7e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-t6hxb" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.140826 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/70535248-a215-4c17-bae6-b193309b6ab3-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-tgcgt\" (UID: \"70535248-a215-4c17-bae6-b193309b6ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.140965 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt4q8\" (UniqueName: \"kubernetes.io/projected/70535248-a215-4c17-bae6-b193309b6ab3-kube-api-access-xt4q8\") pod \"catalog-operator-75ff9f647d-tgcgt\" (UID: \"70535248-a215-4c17-bae6-b193309b6ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.141000 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8jnx\" (UniqueName: \"kubernetes.io/projected/d12cb731-4816-42a2-9a62-84a8695ece7e-kube-api-access-z8jnx\") pod \"cluster-samples-operator-6b564684c8-t6hxb\" (UID: \"d12cb731-4816-42a2-9a62-84a8695ece7e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-t6hxb" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.141069 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/4ce76424-f6a9-4f4d-8cea-7935ccf75fca-tmpfs\") pod \"olm-operator-5cdf44d969-rj8rm\" (UID: \"4ce76424-f6a9-4f4d-8cea-7935ccf75fca\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.144814 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec31b794-4b87-452d-8e77-14a8000d6812-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-v7shj\" (UID: \"ec31b794-4b87-452d-8e77-14a8000d6812\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-v7shj" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.147721 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4ce76424-f6a9-4f4d-8cea-7935ccf75fca-srv-cert\") pod \"olm-operator-5cdf44d969-rj8rm\" (UID: \"4ce76424-f6a9-4f4d-8cea-7935ccf75fca\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.153107 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.166058 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ec31b794-4b87-452d-8e77-14a8000d6812-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-v7shj\" (UID: \"ec31b794-4b87-452d-8e77-14a8000d6812\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-v7shj" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.167242 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4ce76424-f6a9-4f4d-8cea-7935ccf75fca-profile-collector-cert\") pod \"olm-operator-5cdf44d969-rj8rm\" (UID: \"4ce76424-f6a9-4f4d-8cea-7935ccf75fca\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.177041 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.185464 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.193485 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.231893 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-psxhg"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.232183 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jw44r" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.242329 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d12cb731-4816-42a2-9a62-84a8695ece7e-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-t6hxb\" (UID: \"d12cb731-4816-42a2-9a62-84a8695ece7e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-t6hxb" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.242375 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/70535248-a215-4c17-bae6-b193309b6ab3-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-tgcgt\" (UID: \"70535248-a215-4c17-bae6-b193309b6ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.242408 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xt4q8\" (UniqueName: \"kubernetes.io/projected/70535248-a215-4c17-bae6-b193309b6ab3-kube-api-access-xt4q8\") pod \"catalog-operator-75ff9f647d-tgcgt\" (UID: \"70535248-a215-4c17-bae6-b193309b6ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.242428 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z8jnx\" (UniqueName: \"kubernetes.io/projected/d12cb731-4816-42a2-9a62-84a8695ece7e-kube-api-access-z8jnx\") pod \"cluster-samples-operator-6b564684c8-t6hxb\" (UID: \"d12cb731-4816-42a2-9a62-84a8695ece7e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-t6hxb" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.242476 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/70535248-a215-4c17-bae6-b193309b6ab3-tmpfs\") pod \"catalog-operator-75ff9f647d-tgcgt\" (UID: \"70535248-a215-4c17-bae6-b193309b6ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.242510 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/70535248-a215-4c17-bae6-b193309b6ab3-srv-cert\") pod \"catalog-operator-75ff9f647d-tgcgt\" (UID: \"70535248-a215-4c17-bae6-b193309b6ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.242619 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m98rg\" (UniqueName: \"kubernetes.io/projected/8c8db120-c7b7-4a59-933e-9eeda52f3a7d-kube-api-access-m98rg\") pod \"machine-config-operator-67c9d58cbb-zdwjf\" (UID: \"8c8db120-c7b7-4a59-933e-9eeda52f3a7d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zdwjf" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.244241 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/70535248-a215-4c17-bae6-b193309b6ab3-tmpfs\") pod \"catalog-operator-75ff9f647d-tgcgt\" (UID: \"70535248-a215-4c17-bae6-b193309b6ab3\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.247032 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zdwjf" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.249356 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/70535248-a215-4c17-bae6-b193309b6ab3-srv-cert\") pod \"catalog-operator-75ff9f647d-tgcgt\" (UID: \"70535248-a215-4c17-bae6-b193309b6ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.260351 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-b8q59"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.260745 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.266912 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.268519 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k7g4m"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.269209 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.269672 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/70535248-a215-4c17-bae6-b193309b6ab3-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-tgcgt\" (UID: \"70535248-a215-4c17-bae6-b193309b6ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.274048 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l496\" (UniqueName: \"kubernetes.io/projected/e31d46d0-1fc7-4e35-9e8a-6f8033388332-kube-api-access-9l496\") pod \"machine-approver-54c688565-dz5c2\" (UID: \"e31d46d0-1fc7-4e35-9e8a-6f8033388332\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dz5c2" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.279667 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.280441 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-5pr7k"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.283775 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k7g4m" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292506 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d" event={"ID":"ae5457f8-56c6-47bc-86eb-87dfc7cd63c9","Type":"ContainerStarted","Data":"d9da519f3af0729bf48ea3c4e6ebc6090e7779480758388f8c5bbb15640d8c8c"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292562 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d" event={"ID":"ae5457f8-56c6-47bc-86eb-87dfc7cd63c9","Type":"ContainerStarted","Data":"0b52ee05fe7680859d639e20f3c380efec486fceeff72102c2b0838dd12bd350"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292584 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-q6pjq"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292603 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292613 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-2pf7j"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292623 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-rzgsc"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292634 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292645 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-8cgs9"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292657 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-pfh7d"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292667 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484855-j94jx"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292677 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rf4fl"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292686 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292696 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-jvzxb"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292709 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-9xszn"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292719 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jpclx"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292730 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292739 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-g87pb"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292752 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-gflg8"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292764 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292775 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-rrp2m"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292788 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.292805 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-g55fr"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.298459 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-9g2zx"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.299178 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.302919 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.303684 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-qcmvq"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.306181 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.317960 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-mvtfg"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.318011 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-b8q59"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.318028 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.318077 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-zrclr"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.318115 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jw44r"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.318130 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9nglq"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.318146 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-v7shj"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.318181 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4btxw"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.319286 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9g2zx" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.320098 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcmvq" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.326256 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d12cb731-4816-42a2-9a62-84a8695ece7e-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-t6hxb\" (UID: \"d12cb731-4816-42a2-9a62-84a8695ece7e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-t6hxb" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.327106 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.332888 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wrhpn"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.338540 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4btxw" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.348587 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/83de5641-52fb-4052-b0fc-28e02eef5f3a-tmp-dir\") pod \"dns-default-qcmvq\" (UID: \"83de5641-52fb-4052-b0fc-28e02eef5f3a\") " pod="openshift-dns/dns-default-qcmvq" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.348684 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfv6k\" (UniqueName: \"kubernetes.io/projected/83de5641-52fb-4052-b0fc-28e02eef5f3a-kube-api-access-gfv6k\") pod \"dns-default-qcmvq\" (UID: \"83de5641-52fb-4052-b0fc-28e02eef5f3a\") " pod="openshift-dns/dns-default-qcmvq" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.348715 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7bb9dbcc-e884-49d5-9f83-751acd16b0e5-certs\") pod \"machine-config-server-9g2zx\" (UID: \"7bb9dbcc-e884-49d5-9f83-751acd16b0e5\") " pod="openshift-machine-config-operator/machine-config-server-9g2zx" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.348790 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83de5641-52fb-4052-b0fc-28e02eef5f3a-config-volume\") pod \"dns-default-qcmvq\" (UID: \"83de5641-52fb-4052-b0fc-28e02eef5f3a\") " pod="openshift-dns/dns-default-qcmvq" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.348901 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr2dc\" (UniqueName: \"kubernetes.io/projected/7bb9dbcc-e884-49d5-9f83-751acd16b0e5-kube-api-access-sr2dc\") pod \"machine-config-server-9g2zx\" (UID: \"7bb9dbcc-e884-49d5-9f83-751acd16b0e5\") " pod="openshift-machine-config-operator/machine-config-server-9g2zx" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.348970 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/83de5641-52fb-4052-b0fc-28e02eef5f3a-metrics-tls\") pod \"dns-default-qcmvq\" (UID: \"83de5641-52fb-4052-b0fc-28e02eef5f3a\") " pod="openshift-dns/dns-default-qcmvq" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.348992 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7bb9dbcc-e884-49d5-9f83-751acd16b0e5-node-bootstrap-token\") pod \"machine-config-server-9g2zx\" (UID: \"7bb9dbcc-e884-49d5-9f83-751acd16b0e5\") " pod="openshift-machine-config-operator/machine-config-server-9g2zx" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.349089 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxs9h\" (UniqueName: \"kubernetes.io/projected/5c0223c3-0295-4e25-91af-f648e4b081b8-kube-api-access-kxs9h\") pod \"control-plane-machine-set-operator-75ffdb6fcd-k7g4m\" (UID: \"5c0223c3-0295-4e25-91af-f648e4b081b8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k7g4m" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.349184 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c0223c3-0295-4e25-91af-f648e4b081b8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-k7g4m\" (UID: \"5c0223c3-0295-4e25-91af-f648e4b081b8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k7g4m" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.368508 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qcmvq"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.369060 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-xg9k5"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.407253 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.409417 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-t6hxb"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.409778 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-psxhg"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.409842 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-b798r"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.410840 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.410912 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-w89ff"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.410968 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-rwsnz"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.411035 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k7g4m"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.411092 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4btxw"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.411147 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zdwjf"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.415282 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wrhpn"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.415408 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.415464 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-2pf7j"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.415528 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-ldzlj"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.415583 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-q6pjq"] Jan 22 
14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.415648 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.415741 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.415804 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rf4fl"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.415667 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.410147 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.431649 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-g87pb"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.431718 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-9xszn"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.433960 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.440047 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbv4k\" (UniqueName: \"kubernetes.io/projected/ec31b794-4b87-452d-8e77-14a8000d6812-kube-api-access-lbv4k\") pod \"machine-config-controller-f9cdd68f7-v7shj\" (UID: \"ec31b794-4b87-452d-8e77-14a8000d6812\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-v7shj" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.445584 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-8cgs9"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.453879 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kxs9h\" (UniqueName: \"kubernetes.io/projected/5c0223c3-0295-4e25-91af-f648e4b081b8-kube-api-access-kxs9h\") pod \"control-plane-machine-set-operator-75ffdb6fcd-k7g4m\" (UID: \"5c0223c3-0295-4e25-91af-f648e4b081b8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k7g4m" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.453942 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c0223c3-0295-4e25-91af-f648e4b081b8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-k7g4m\" (UID: \"5c0223c3-0295-4e25-91af-f648e4b081b8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k7g4m" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.453981 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/83de5641-52fb-4052-b0fc-28e02eef5f3a-tmp-dir\") pod \"dns-default-qcmvq\" (UID: \"83de5641-52fb-4052-b0fc-28e02eef5f3a\") " 
pod="openshift-dns/dns-default-qcmvq" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.454010 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gfv6k\" (UniqueName: \"kubernetes.io/projected/83de5641-52fb-4052-b0fc-28e02eef5f3a-kube-api-access-gfv6k\") pod \"dns-default-qcmvq\" (UID: \"83de5641-52fb-4052-b0fc-28e02eef5f3a\") " pod="openshift-dns/dns-default-qcmvq" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.454025 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7bb9dbcc-e884-49d5-9f83-751acd16b0e5-certs\") pod \"machine-config-server-9g2zx\" (UID: \"7bb9dbcc-e884-49d5-9f83-751acd16b0e5\") " pod="openshift-machine-config-operator/machine-config-server-9g2zx" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.454059 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83de5641-52fb-4052-b0fc-28e02eef5f3a-config-volume\") pod \"dns-default-qcmvq\" (UID: \"83de5641-52fb-4052-b0fc-28e02eef5f3a\") " pod="openshift-dns/dns-default-qcmvq" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.454099 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sr2dc\" (UniqueName: \"kubernetes.io/projected/7bb9dbcc-e884-49d5-9f83-751acd16b0e5-kube-api-access-sr2dc\") pod \"machine-config-server-9g2zx\" (UID: \"7bb9dbcc-e884-49d5-9f83-751acd16b0e5\") " pod="openshift-machine-config-operator/machine-config-server-9g2zx" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.454125 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/83de5641-52fb-4052-b0fc-28e02eef5f3a-metrics-tls\") pod \"dns-default-qcmvq\" (UID: \"83de5641-52fb-4052-b0fc-28e02eef5f3a\") " pod="openshift-dns/dns-default-qcmvq" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.454140 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7bb9dbcc-e884-49d5-9f83-751acd16b0e5-node-bootstrap-token\") pod \"machine-config-server-9g2zx\" (UID: \"7bb9dbcc-e884-49d5-9f83-751acd16b0e5\") " pod="openshift-machine-config-operator/machine-config-server-9g2zx" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.454334 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.457552 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/83de5641-52fb-4052-b0fc-28e02eef5f3a-tmp-dir\") pod \"dns-default-qcmvq\" (UID: \"83de5641-52fb-4052-b0fc-28e02eef5f3a\") " pod="openshift-dns/dns-default-qcmvq" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.461697 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.476302 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jpclx"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.477129 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6llp\" (UniqueName: 
\"kubernetes.io/projected/4ce76424-f6a9-4f4d-8cea-7935ccf75fca-kube-api-access-r6llp\") pod \"olm-operator-5cdf44d969-rj8rm\" (UID: \"4ce76424-f6a9-4f4d-8cea-7935ccf75fca\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.483563 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.486489 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-xg9k5"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.490027 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-rzgsc"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.495646 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484855-j94jx"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.503343 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-jvzxb"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.513472 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-b798r"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.514651 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-w89ff"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.515826 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-g87pb" event={"ID":"c451b1af-aa09-4621-b237-72a053e5e347","Type":"ContainerStarted","Data":"8917ad624119c3982874557333de2995f0f10860d8e969fb40ae0db8ef7e18fb"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.515912 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-g87pb" event={"ID":"c451b1af-aa09-4621-b237-72a053e5e347","Type":"ContainerStarted","Data":"5ea5bf07d41b30354575b31681ea56be19c897253f08d4fd48610b45df44b7c2"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.517509 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.519726 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-pfh7d"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.524286 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-zrclr"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.527543 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.527739 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt4q8\" (UniqueName: \"kubernetes.io/projected/70535248-a215-4c17-bae6-b193309b6ab3-kube-api-access-xt4q8\") pod \"catalog-operator-75ff9f647d-tgcgt\" (UID: \"70535248-a215-4c17-bae6-b193309b6ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt" Jan 22 14:15:58 crc kubenswrapper[5099]: 
I0122 14:15:58.529002 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-rrp2m"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.534044 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.536419 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-dz5c2" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.541234 5099 generic.go:358] "Generic (PLEG): container finished" podID="65c18850-3f68-4603-955b-12a4ab882766" containerID="505b567b1e1841c8586854f4d1a2ca277d8170c0d9fb8325ba2244222bf5df04" exitCode=0 Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.541474 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-2pf7j" event={"ID":"65c18850-3f68-4603-955b-12a4ab882766","Type":"ContainerDied","Data":"505b567b1e1841c8586854f4d1a2ca277d8170c0d9fb8325ba2244222bf5df04"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.542938 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8jnx\" (UniqueName: \"kubernetes.io/projected/d12cb731-4816-42a2-9a62-84a8695ece7e-kube-api-access-z8jnx\") pod \"cluster-samples-operator-6b564684c8-t6hxb\" (UID: \"d12cb731-4816-42a2-9a62-84a8695ece7e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-t6hxb" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.545533 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-ldzlj" event={"ID":"513664b2-32c9-4290-9ae7-2400a1c4da84","Type":"ContainerStarted","Data":"6eccc13999d51aec147ee234025a66ddbc32ac298960c5ae3e0ec7b983a0aeea"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.546084 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-ldzlj" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.547877 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" event={"ID":"ec74add6-7b16-4c96-aee1-336b53788c2a","Type":"ContainerStarted","Data":"fe8bc97d7f0e5d81f690bb74f406799e5adc5832bbc0668d71e223219a4d92a1"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.554137 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-q6pjq" event={"ID":"c829ef61-cc95-4adc-88a6-433ed349112d","Type":"ContainerStarted","Data":"5c3302badbf458663a678d0eb5216131e90845b795566179b163600eed1f75b7"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.555301 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.564369 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" event={"ID":"7894c17b-6de7-426e-b27a-4834b7186e8f","Type":"ContainerStarted","Data":"98c20b1c3525153709b26db562ee5682f654df4b1d0cab9fbf061f3351d948c4"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.574236 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 22 14:15:58 crc 
kubenswrapper[5099]: I0122 14:15:58.575633 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-ldzlj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.575675 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ldzlj" podUID="513664b2-32c9-4290-9ae7-2400a1c4da84" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.579180 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-j94jx" event={"ID":"0a2c820d-64d7-48dc-845e-6fea4213ccbe","Type":"ContainerStarted","Data":"c0bb71f0283f4a08232f8cdf313ce5be0fce745867c894ac9091f6968c3a0706"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.587846 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" event={"ID":"0bbff495-517c-4f7c-b0e0-797cb63884c9","Type":"ContainerStarted","Data":"50dcfe6809b0864ac6f2bc14d1ac88044aca0099b299f9275fc8faabf4b2a43b"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.587878 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" event={"ID":"0bbff495-517c-4f7c-b0e0-797cb63884c9","Type":"ContainerStarted","Data":"33e5565d82a8a335fa9da33cfde56e7054663005192cfdfda73e852a2d313610"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.589001 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.592183 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jpclx" event={"ID":"46d7f2bf-b38a-4bea-ba85-211ffe114151","Type":"ContainerStarted","Data":"f9c8b25133a9386184d01bcd5c7966bd2cb96e92c8404b048a6e121227b9641a"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.598634 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl" event={"ID":"b8408c5c-b382-4045-96dd-4204e71d0798","Type":"ContainerStarted","Data":"80665e203a5b904ab3445d6d20eb44a2c882d3d503100bf07789219e7b0ca462"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.601211 5099 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-k7dg6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.601407 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" podUID="0bbff495-517c-4f7c-b0e0-797cb63884c9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.601667 5099 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.612099 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" event={"ID":"04ae431a-3013-4a10-95c2-3b4cb5c52cd0","Type":"ContainerStarted","Data":"8b942966ddb7a7de3a8a9b89c326907686c911824245320cc400086164a51866"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.613700 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.620603 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" event={"ID":"f258327b-5062-4466-bd24-cc22c2f56087","Type":"ContainerStarted","Data":"a443273c0431f8fda83b9930668292a56e5ce95fb0497c81c4112f569f753177"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.621973 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-jvzxb" event={"ID":"ee755aa6-a943-4869-a688-c0da5d38aafa","Type":"ContainerStarted","Data":"d2ceb29e9fe3379227fe252e368a8a6268088dc541eb471c68bfe6c1a05b7a80"} Jan 22 14:15:58 crc kubenswrapper[5099]: W0122 14:15:58.626999 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd69099b1_1e3d_4007_b1d1_039d6df91bb7.slice/crio-8baba0ca28a434358dc14100b24a20bdd55755d9453511e6bfea23a42ee19a11 WatchSource:0}: Error finding container 8baba0ca28a434358dc14100b24a20bdd55755d9453511e6bfea23a42ee19a11: Status 404 returned error can't find the container with id 8baba0ca28a434358dc14100b24a20bdd55755d9453511e6bfea23a42ee19a11 Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.627375 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-xg9k5" event={"ID":"20945218-8d75-4f6e-ac07-6815b888e9b7","Type":"ContainerStarted","Data":"eb4fed89f958eeb594726e94f62b8f01d581a2ab6c5c6334cc2339711fc5c84c"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.632216 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-rwsnz"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.632976 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.638604 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-b798r" event={"ID":"ea2249f7-4927-4920-9ce2-aaa3cc5749ba","Type":"ContainerStarted","Data":"5e993b2d4b0381073cdcabbabfa3d856ace01ca7d41a1804135658833f295580"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.648504 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-rzgsc" event={"ID":"8efd2ae7-b5de-4a8a-beb9-fcc7adcb2241","Type":"ContainerStarted","Data":"11588afb70a45dfaf63b82945a62f0f404d1a9e826ed7bae0f34e534904e0fb6"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.648588 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-rzgsc" 
event={"ID":"8efd2ae7-b5de-4a8a-beb9-fcc7adcb2241","Type":"ContainerStarted","Data":"cf6890ab7e9ffc9bf3446b96491a439d34c4bfda8a66c6a6dbb2b6b525d390b7"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.657587 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.681028 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-8cgs9" event={"ID":"b1a399d2-6deb-4080-8525-6419d061001b","Type":"ContainerStarted","Data":"0c2bc6fd42c46982e221b6cf114fbe469ad04f0463680c492924a08fc06de362"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.681390 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.682056 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-8cgs9" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.683459 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9nglq"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.688253 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.693000 5099 patch_prober.go:28] interesting pod/console-operator-67c89758df-8cgs9 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.693073 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-8cgs9" podUID="b1a399d2-6deb-4080-8525-6419d061001b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.693251 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.697561 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" event={"ID":"94313bd3-0b8e-452e-b3b0-c549aabb8426","Type":"ContainerStarted","Data":"34047ea6e030398776cb06e1fb65480408e9951a045599fb9b6019409b68b295"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.706750 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.714383 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-v7shj" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.715840 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.717839 5099 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-9xszn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.717903 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" podUID="94313bd3-0b8e-452e-b3b0-c549aabb8426" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.735744 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.749402 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.758981 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.774389 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-t6hxb" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.776180 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.794892 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rf4fl" event={"ID":"0fe48341-9de1-4b01-957e-16aee9a3eb75","Type":"ContainerStarted","Data":"843ed03e3ea2acc7c5b996aeedf3203d0e5e7404e45a0efe90c204ada269b8e4"} Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.811358 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.813744 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.833674 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.874381 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c0223c3-0295-4e25-91af-f648e4b081b8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-k7g4m\" (UID: \"5c0223c3-0295-4e25-91af-f648e4b081b8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k7g4m" Jan 22 14:15:58 crc kubenswrapper[5099]: 
I0122 14:15:58.876828 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.878101 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.888045 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-gflg8"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.896014 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.915192 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.944551 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-mvtfg"] Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.945557 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 22 14:15:58 crc kubenswrapper[5099]: W0122 14:15:58.946554 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode31d46d0_1fc7_4e35_9e8a_6f8033388332.slice/crio-1aad9c5fe9cc9e76f1d35bcc669c773af8ca8df48478bd7225186a329d18b311 WatchSource:0}: Error finding container 1aad9c5fe9cc9e76f1d35bcc669c773af8ca8df48478bd7225186a329d18b311: Status 404 returned error can't find the container with id 1aad9c5fe9cc9e76f1d35bcc669c773af8ca8df48478bd7225186a329d18b311 Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.964431 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.975667 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 22 14:15:58 crc kubenswrapper[5099]: I0122 14:15:58.997992 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.017999 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.034621 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.052541 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.062486 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7bb9dbcc-e884-49d5-9f83-751acd16b0e5-node-bootstrap-token\") pod \"machine-config-server-9g2zx\" (UID: \"7bb9dbcc-e884-49d5-9f83-751acd16b0e5\") " pod="openshift-machine-config-operator/machine-config-server-9g2zx" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.062850 5099 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7bb9dbcc-e884-49d5-9f83-751acd16b0e5-certs\") pod \"machine-config-server-9g2zx\" (UID: \"7bb9dbcc-e884-49d5-9f83-751acd16b0e5\") " pod="openshift-machine-config-operator/machine-config-server-9g2zx" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.076913 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.091240 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83de5641-52fb-4052-b0fc-28e02eef5f3a-config-volume\") pod \"dns-default-qcmvq\" (UID: \"83de5641-52fb-4052-b0fc-28e02eef5f3a\") " pod="openshift-dns/dns-default-qcmvq" Jan 22 14:15:59 crc kubenswrapper[5099]: W0122 14:15:59.095291 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01a5601d_43bd_4873_ad86_10e225b7c31b.slice/crio-007ddc769cae6f71ab75de11212aef40ec315194d372deeb89aaffbc09d232ae WatchSource:0}: Error finding container 007ddc769cae6f71ab75de11212aef40ec315194d372deeb89aaffbc09d232ae: Status 404 returned error can't find the container with id 007ddc769cae6f71ab75de11212aef40ec315194d372deeb89aaffbc09d232ae Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.095586 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.122659 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.144238 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/83de5641-52fb-4052-b0fc-28e02eef5f3a-metrics-tls\") pod \"dns-default-qcmvq\" (UID: \"83de5641-52fb-4052-b0fc-28e02eef5f3a\") " pod="openshift-dns/dns-default-qcmvq" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.158923 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.173562 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.199467 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.216507 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.259067 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zdwjf"] Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.264353 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.276754 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 
14:15:59.293544 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.361023 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxs9h\" (UniqueName: \"kubernetes.io/projected/5c0223c3-0295-4e25-91af-f648e4b081b8-kube-api-access-kxs9h\") pod \"control-plane-machine-set-operator-75ffdb6fcd-k7g4m\" (UID: \"5c0223c3-0295-4e25-91af-f648e4b081b8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k7g4m" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.372830 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfv6k\" (UniqueName: \"kubernetes.io/projected/83de5641-52fb-4052-b0fc-28e02eef5f3a-kube-api-access-gfv6k\") pod \"dns-default-qcmvq\" (UID: \"83de5641-52fb-4052-b0fc-28e02eef5f3a\") " pod="openshift-dns/dns-default-qcmvq" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.382031 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr2dc\" (UniqueName: \"kubernetes.io/projected/7bb9dbcc-e884-49d5-9f83-751acd16b0e5-kube-api-access-sr2dc\") pod \"machine-config-server-9g2zx\" (UID: \"7bb9dbcc-e884-49d5-9f83-751acd16b0e5\") " pod="openshift-machine-config-operator/machine-config-server-9g2zx" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.400317 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b75d2eb4-efad-414a-8c7e-c64e0f83cb2b-stats-auth\") pod \"router-default-68cf44c8b8-5pr7k\" (UID: \"b75d2eb4-efad-414a-8c7e-c64e0f83cb2b\") " pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.400363 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/00219034-b44a-4db2-ad80-b04ff5eacac5-registry-tls\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.400383 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztwtl\" (UniqueName: \"kubernetes.io/projected/00219034-b44a-4db2-ad80-b04ff5eacac5-kube-api-access-ztwtl\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.400400 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwstw\" (UniqueName: \"kubernetes.io/projected/4944256e-76fd-4652-80c6-a5f9217aadc3-kube-api-access-jwstw\") pod \"cni-sysctl-allowlist-ds-g55fr\" (UID: \"4944256e-76fd-4652-80c6-a5f9217aadc3\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.400418 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7870c98b-5f29-4b36-9b11-e2564ff6bad7-config\") pod \"openshift-controller-manager-operator-686468bdd5-jw44r\" (UID: \"7870c98b-5f29-4b36-9b11-e2564ff6bad7\") " 
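The reconciler entries in this stretch show the kubelet working through each pod's declared volumes: the UniqueName strings encode the volume plugin (kubernetes.io/secret, configmap, projected, host-path, empty-dir, csi), the owning pod UID, and the volume name, and each volume passes through VerifyControllerAttachedVolume, MountVolume started, and MountVolume.SetUp succeeded in turn. The pod manifests themselves are not in the log; as a minimal sketch, the volume declarations behind the router-default-68cf44c8b8-5pr7k entries might look roughly like the following, where the secret/configmap pairings are assumptions inferred from the Secret and ConfigMap reflector lines above.

```go
// Illustrative reconstruction only: volume names are taken from the
// router-default-68cf44c8b8-5pr7k reconciler entries; the SecretName and
// ConfigMap pairings are assumptions based on the reflector cache lines
// for the openshift-ingress namespace, not on a manifest in this log.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		{
			Name: "default-certificate",
			VolumeSource: corev1.VolumeSource{
				// Pairing assumed from the "router-certs-default" Secret reflector entry.
				Secret: &corev1.SecretVolumeSource{SecretName: "router-certs-default"},
			},
		},
		{
			Name: "stats-auth",
			VolumeSource: corev1.VolumeSource{
				// Pairing assumed from the "router-stats-default" Secret reflector entry.
				Secret: &corev1.SecretVolumeSource{SecretName: "router-stats-default"},
			},
		},
		{
			Name: "service-ca-bundle",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "service-ca-bundle"},
				},
			},
		},
	}
	for _, v := range volumes {
		fmt.Println("volume:", v.Name)
	}
}
```

The kube-api-access-* volumes seen throughout this section are the projected service-account token volumes that Kubernetes adds to each pod automatically.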
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jw44r" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.400440 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c42ec0c1-434b-41d2-b134-decdccef1700-config\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.400458 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c42ec0c1-434b-41d2-b134-decdccef1700-encryption-config\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.400490 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b75d2eb4-efad-414a-8c7e-c64e0f83cb2b-default-certificate\") pod \"router-default-68cf44c8b8-5pr7k\" (UID: \"b75d2eb4-efad-414a-8c7e-c64e0f83cb2b\") " pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.401415 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c42ec0c1-434b-41d2-b134-decdccef1700-audit-dir\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.401451 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c42ec0c1-434b-41d2-b134-decdccef1700-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.401470 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxgbh\" (UniqueName: \"kubernetes.io/projected/b75d2eb4-efad-414a-8c7e-c64e0f83cb2b-kube-api-access-nxgbh\") pod \"router-default-68cf44c8b8-5pr7k\" (UID: \"b75d2eb4-efad-414a-8c7e-c64e0f83cb2b\") " pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.401485 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4944256e-76fd-4652-80c6-a5f9217aadc3-ready\") pod \"cni-sysctl-allowlist-ds-g55fr\" (UID: \"4944256e-76fd-4652-80c6-a5f9217aadc3\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.401504 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c42ec0c1-434b-41d2-b134-decdccef1700-audit\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.401518 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/c42ec0c1-434b-41d2-b134-decdccef1700-serving-cert\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.401596 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c42ec0c1-434b-41d2-b134-decdccef1700-etcd-client\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.401626 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/00219034-b44a-4db2-ad80-b04ff5eacac5-registry-certificates\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.401653 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b75d2eb4-efad-414a-8c7e-c64e0f83cb2b-metrics-certs\") pod \"router-default-68cf44c8b8-5pr7k\" (UID: \"b75d2eb4-efad-414a-8c7e-c64e0f83cb2b\") " pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.401674 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7870c98b-5f29-4b36-9b11-e2564ff6bad7-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-jw44r\" (UID: \"7870c98b-5f29-4b36-9b11-e2564ff6bad7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jw44r" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.401727 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7870c98b-5f29-4b36-9b11-e2564ff6bad7-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-jw44r\" (UID: \"7870c98b-5f29-4b36-9b11-e2564ff6bad7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jw44r" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.401744 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b75d2eb4-efad-414a-8c7e-c64e0f83cb2b-service-ca-bundle\") pod \"router-default-68cf44c8b8-5pr7k\" (UID: \"b75d2eb4-efad-414a-8c7e-c64e0f83cb2b\") " pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.401759 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c42ec0c1-434b-41d2-b134-decdccef1700-node-pullsecrets\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.401775 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/4944256e-76fd-4652-80c6-a5f9217aadc3-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-g55fr\" (UID: \"4944256e-76fd-4652-80c6-a5f9217aadc3\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.401831 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/00219034-b44a-4db2-ad80-b04ff5eacac5-bound-sa-token\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.401847 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f877fa69-0d40-4ecd-a63a-eba7bb0459e6-cert\") pod \"ingress-canary-4btxw\" (UID: \"f877fa69-0d40-4ecd-a63a-eba7bb0459e6\") " pod="openshift-ingress-canary/ingress-canary-4btxw" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.401863 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c42ec0c1-434b-41d2-b134-decdccef1700-image-import-ca\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.401894 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00219034-b44a-4db2-ad80-b04ff5eacac5-trusted-ca\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.401924 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/00219034-b44a-4db2-ad80-b04ff5eacac5-ca-trust-extracted\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.401946 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/00219034-b44a-4db2-ad80-b04ff5eacac5-installation-pull-secrets\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.401980 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlvnj\" (UniqueName: \"kubernetes.io/projected/f877fa69-0d40-4ecd-a63a-eba7bb0459e6-kube-api-access-xlvnj\") pod \"ingress-canary-4btxw\" (UID: \"f877fa69-0d40-4ecd-a63a-eba7bb0459e6\") " pod="openshift-ingress-canary/ingress-canary-4btxw" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.402000 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhbw9\" (UniqueName: \"kubernetes.io/projected/c42ec0c1-434b-41d2-b134-decdccef1700-kube-api-access-dhbw9\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") 
" pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.402023 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxbtc\" (UniqueName: \"kubernetes.io/projected/7870c98b-5f29-4b36-9b11-e2564ff6bad7-kube-api-access-vxbtc\") pod \"openshift-controller-manager-operator-686468bdd5-jw44r\" (UID: \"7870c98b-5f29-4b36-9b11-e2564ff6bad7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jw44r" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.402091 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c42ec0c1-434b-41d2-b134-decdccef1700-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.402143 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4944256e-76fd-4652-80c6-a5f9217aadc3-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-g55fr\" (UID: \"4944256e-76fd-4652-80c6-a5f9217aadc3\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.402192 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: E0122 14:15:59.402524 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:15:59.902510126 +0000 UTC m=+117.610260363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.415541 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-qcmvq" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.507881 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.508353 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/00219034-b44a-4db2-ad80-b04ff5eacac5-ca-trust-extracted\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.508383 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/00219034-b44a-4db2-ad80-b04ff5eacac5-installation-pull-secrets\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.508425 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xlvnj\" (UniqueName: \"kubernetes.io/projected/f877fa69-0d40-4ecd-a63a-eba7bb0459e6-kube-api-access-xlvnj\") pod \"ingress-canary-4btxw\" (UID: \"f877fa69-0d40-4ecd-a63a-eba7bb0459e6\") " pod="openshift-ingress-canary/ingress-canary-4btxw" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.508444 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dhbw9\" (UniqueName: \"kubernetes.io/projected/c42ec0c1-434b-41d2-b134-decdccef1700-kube-api-access-dhbw9\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.508462 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vxbtc\" (UniqueName: \"kubernetes.io/projected/7870c98b-5f29-4b36-9b11-e2564ff6bad7-kube-api-access-vxbtc\") pod \"openshift-controller-manager-operator-686468bdd5-jw44r\" (UID: \"7870c98b-5f29-4b36-9b11-e2564ff6bad7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jw44r" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.508480 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/82eb9134-c59e-4bae-a74e-02ec60240232-csi-data-dir\") pod \"csi-hostpathplugin-wrhpn\" (UID: \"82eb9134-c59e-4bae-a74e-02ec60240232\") " pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.508501 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c42ec0c1-434b-41d2-b134-decdccef1700-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.508537 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4944256e-76fd-4652-80c6-a5f9217aadc3-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-g55fr\" (UID: \"4944256e-76fd-4652-80c6-a5f9217aadc3\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.508789 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b75d2eb4-efad-414a-8c7e-c64e0f83cb2b-stats-auth\") pod \"router-default-68cf44c8b8-5pr7k\" (UID: \"b75d2eb4-efad-414a-8c7e-c64e0f83cb2b\") " pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.508812 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/00219034-b44a-4db2-ad80-b04ff5eacac5-registry-tls\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.508828 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ztwtl\" (UniqueName: \"kubernetes.io/projected/00219034-b44a-4db2-ad80-b04ff5eacac5-kube-api-access-ztwtl\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.508851 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jwstw\" (UniqueName: \"kubernetes.io/projected/4944256e-76fd-4652-80c6-a5f9217aadc3-kube-api-access-jwstw\") pod \"cni-sysctl-allowlist-ds-g55fr\" (UID: \"4944256e-76fd-4652-80c6-a5f9217aadc3\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.508875 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7870c98b-5f29-4b36-9b11-e2564ff6bad7-config\") pod \"openshift-controller-manager-operator-686468bdd5-jw44r\" (UID: \"7870c98b-5f29-4b36-9b11-e2564ff6bad7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jw44r" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.508903 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c42ec0c1-434b-41d2-b134-decdccef1700-config\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.508919 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c42ec0c1-434b-41d2-b134-decdccef1700-encryption-config\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.508973 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b75d2eb4-efad-414a-8c7e-c64e0f83cb2b-default-certificate\") pod \"router-default-68cf44c8b8-5pr7k\" (UID: \"b75d2eb4-efad-414a-8c7e-c64e0f83cb2b\") " pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" Jan 22 14:15:59 crc 
kubenswrapper[5099]: I0122 14:15:59.508997 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/82eb9134-c59e-4bae-a74e-02ec60240232-socket-dir\") pod \"csi-hostpathplugin-wrhpn\" (UID: \"82eb9134-c59e-4bae-a74e-02ec60240232\") " pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509032 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c42ec0c1-434b-41d2-b134-decdccef1700-audit-dir\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509069 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c42ec0c1-434b-41d2-b134-decdccef1700-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509098 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nxgbh\" (UniqueName: \"kubernetes.io/projected/b75d2eb4-efad-414a-8c7e-c64e0f83cb2b-kube-api-access-nxgbh\") pod \"router-default-68cf44c8b8-5pr7k\" (UID: \"b75d2eb4-efad-414a-8c7e-c64e0f83cb2b\") " pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509315 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4944256e-76fd-4652-80c6-a5f9217aadc3-ready\") pod \"cni-sysctl-allowlist-ds-g55fr\" (UID: \"4944256e-76fd-4652-80c6-a5f9217aadc3\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509357 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c42ec0c1-434b-41d2-b134-decdccef1700-audit\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509373 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c42ec0c1-434b-41d2-b134-decdccef1700-serving-cert\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509412 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/82eb9134-c59e-4bae-a74e-02ec60240232-mountpoint-dir\") pod \"csi-hostpathplugin-wrhpn\" (UID: \"82eb9134-c59e-4bae-a74e-02ec60240232\") " pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509426 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/82eb9134-c59e-4bae-a74e-02ec60240232-registration-dir\") pod \"csi-hostpathplugin-wrhpn\" (UID: \"82eb9134-c59e-4bae-a74e-02ec60240232\") " pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" Jan 
22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509478 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c42ec0c1-434b-41d2-b134-decdccef1700-etcd-client\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509497 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4gpm\" (UniqueName: \"kubernetes.io/projected/82eb9134-c59e-4bae-a74e-02ec60240232-kube-api-access-s4gpm\") pod \"csi-hostpathplugin-wrhpn\" (UID: \"82eb9134-c59e-4bae-a74e-02ec60240232\") " pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509535 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/00219034-b44a-4db2-ad80-b04ff5eacac5-registry-certificates\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509565 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b75d2eb4-efad-414a-8c7e-c64e0f83cb2b-metrics-certs\") pod \"router-default-68cf44c8b8-5pr7k\" (UID: \"b75d2eb4-efad-414a-8c7e-c64e0f83cb2b\") " pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509587 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7870c98b-5f29-4b36-9b11-e2564ff6bad7-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-jw44r\" (UID: \"7870c98b-5f29-4b36-9b11-e2564ff6bad7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jw44r" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509633 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7870c98b-5f29-4b36-9b11-e2564ff6bad7-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-jw44r\" (UID: \"7870c98b-5f29-4b36-9b11-e2564ff6bad7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jw44r" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509653 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b75d2eb4-efad-414a-8c7e-c64e0f83cb2b-service-ca-bundle\") pod \"router-default-68cf44c8b8-5pr7k\" (UID: \"b75d2eb4-efad-414a-8c7e-c64e0f83cb2b\") " pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509669 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c42ec0c1-434b-41d2-b134-decdccef1700-node-pullsecrets\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509687 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/4944256e-76fd-4652-80c6-a5f9217aadc3-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-g55fr\" (UID: \"4944256e-76fd-4652-80c6-a5f9217aadc3\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509738 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/00219034-b44a-4db2-ad80-b04ff5eacac5-bound-sa-token\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509757 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f877fa69-0d40-4ecd-a63a-eba7bb0459e6-cert\") pod \"ingress-canary-4btxw\" (UID: \"f877fa69-0d40-4ecd-a63a-eba7bb0459e6\") " pod="openshift-ingress-canary/ingress-canary-4btxw" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509772 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c42ec0c1-434b-41d2-b134-decdccef1700-image-import-ca\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509828 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00219034-b44a-4db2-ad80-b04ff5eacac5-trusted-ca\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.509845 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/82eb9134-c59e-4bae-a74e-02ec60240232-plugins-dir\") pod \"csi-hostpathplugin-wrhpn\" (UID: \"82eb9134-c59e-4bae-a74e-02ec60240232\") " pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.511056 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4944256e-76fd-4652-80c6-a5f9217aadc3-ready\") pod \"cni-sysctl-allowlist-ds-g55fr\" (UID: \"4944256e-76fd-4652-80c6-a5f9217aadc3\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.511276 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c42ec0c1-434b-41d2-b134-decdccef1700-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: E0122 14:15:59.511397 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:00.011375482 +0000 UTC m=+117.719125739 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.511899 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c42ec0c1-434b-41d2-b134-decdccef1700-audit\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.512702 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/00219034-b44a-4db2-ad80-b04ff5eacac5-ca-trust-extracted\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.536088 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7870c98b-5f29-4b36-9b11-e2564ff6bad7-config\") pod \"openshift-controller-manager-operator-686468bdd5-jw44r\" (UID: \"7870c98b-5f29-4b36-9b11-e2564ff6bad7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jw44r" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.537457 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c42ec0c1-434b-41d2-b134-decdccef1700-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.537540 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4944256e-76fd-4652-80c6-a5f9217aadc3-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-g55fr\" (UID: \"4944256e-76fd-4652-80c6-a5f9217aadc3\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.542275 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/00219034-b44a-4db2-ad80-b04ff5eacac5-registry-certificates\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.546032 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7870c98b-5f29-4b36-9b11-e2564ff6bad7-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-jw44r\" (UID: \"7870c98b-5f29-4b36-9b11-e2564ff6bad7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jw44r" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.547116 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c42ec0c1-434b-41d2-b134-decdccef1700-config\") pod 
\"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.547115 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/00219034-b44a-4db2-ad80-b04ff5eacac5-registry-tls\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.548275 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c42ec0c1-434b-41d2-b134-decdccef1700-audit-dir\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.549232 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c42ec0c1-434b-41d2-b134-decdccef1700-image-import-ca\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.550116 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b75d2eb4-efad-414a-8c7e-c64e0f83cb2b-service-ca-bundle\") pod \"router-default-68cf44c8b8-5pr7k\" (UID: \"b75d2eb4-efad-414a-8c7e-c64e0f83cb2b\") " pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.550923 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00219034-b44a-4db2-ad80-b04ff5eacac5-trusted-ca\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.550974 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c42ec0c1-434b-41d2-b134-decdccef1700-node-pullsecrets\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.554987 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4944256e-76fd-4652-80c6-a5f9217aadc3-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-g55fr\" (UID: \"4944256e-76fd-4652-80c6-a5f9217aadc3\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.563943 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/00219034-b44a-4db2-ad80-b04ff5eacac5-installation-pull-secrets\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.564635 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c42ec0c1-434b-41d2-b134-decdccef1700-serving-cert\") pod 
\"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.579431 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b75d2eb4-efad-414a-8c7e-c64e0f83cb2b-default-certificate\") pod \"router-default-68cf44c8b8-5pr7k\" (UID: \"b75d2eb4-efad-414a-8c7e-c64e0f83cb2b\") " pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.579815 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b75d2eb4-efad-414a-8c7e-c64e0f83cb2b-metrics-certs\") pod \"router-default-68cf44c8b8-5pr7k\" (UID: \"b75d2eb4-efad-414a-8c7e-c64e0f83cb2b\") " pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.581312 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7870c98b-5f29-4b36-9b11-e2564ff6bad7-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-jw44r\" (UID: \"7870c98b-5f29-4b36-9b11-e2564ff6bad7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jw44r" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.581867 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b75d2eb4-efad-414a-8c7e-c64e0f83cb2b-stats-auth\") pod \"router-default-68cf44c8b8-5pr7k\" (UID: \"b75d2eb4-efad-414a-8c7e-c64e0f83cb2b\") " pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.586827 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c42ec0c1-434b-41d2-b134-decdccef1700-etcd-client\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.588751 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztwtl\" (UniqueName: \"kubernetes.io/projected/00219034-b44a-4db2-ad80-b04ff5eacac5-kube-api-access-ztwtl\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.596501 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c42ec0c1-434b-41d2-b134-decdccef1700-encryption-config\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.609443 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k7g4m" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.614576 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/82eb9134-c59e-4bae-a74e-02ec60240232-plugins-dir\") pod \"csi-hostpathplugin-wrhpn\" (UID: \"82eb9134-c59e-4bae-a74e-02ec60240232\") " pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.614645 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/82eb9134-c59e-4bae-a74e-02ec60240232-csi-data-dir\") pod \"csi-hostpathplugin-wrhpn\" (UID: \"82eb9134-c59e-4bae-a74e-02ec60240232\") " pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.614693 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.614769 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/82eb9134-c59e-4bae-a74e-02ec60240232-socket-dir\") pod \"csi-hostpathplugin-wrhpn\" (UID: \"82eb9134-c59e-4bae-a74e-02ec60240232\") " pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.614856 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/82eb9134-c59e-4bae-a74e-02ec60240232-mountpoint-dir\") pod \"csi-hostpathplugin-wrhpn\" (UID: \"82eb9134-c59e-4bae-a74e-02ec60240232\") " pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.614877 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/82eb9134-c59e-4bae-a74e-02ec60240232-registration-dir\") pod \"csi-hostpathplugin-wrhpn\" (UID: \"82eb9134-c59e-4bae-a74e-02ec60240232\") " pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.614913 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s4gpm\" (UniqueName: \"kubernetes.io/projected/82eb9134-c59e-4bae-a74e-02ec60240232-kube-api-access-s4gpm\") pod \"csi-hostpathplugin-wrhpn\" (UID: \"82eb9134-c59e-4bae-a74e-02ec60240232\") " pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" Jan 22 14:15:59 crc kubenswrapper[5099]: E0122 14:15:59.620949 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:00.120915597 +0000 UTC m=+117.828665864 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.621492 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/82eb9134-c59e-4bae-a74e-02ec60240232-plugins-dir\") pod \"csi-hostpathplugin-wrhpn\" (UID: \"82eb9134-c59e-4bae-a74e-02ec60240232\") " pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.621588 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/82eb9134-c59e-4bae-a74e-02ec60240232-socket-dir\") pod \"csi-hostpathplugin-wrhpn\" (UID: \"82eb9134-c59e-4bae-a74e-02ec60240232\") " pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.621663 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/82eb9134-c59e-4bae-a74e-02ec60240232-registration-dir\") pod \"csi-hostpathplugin-wrhpn\" (UID: \"82eb9134-c59e-4bae-a74e-02ec60240232\") " pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.621677 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/82eb9134-c59e-4bae-a74e-02ec60240232-mountpoint-dir\") pod \"csi-hostpathplugin-wrhpn\" (UID: \"82eb9134-c59e-4bae-a74e-02ec60240232\") " pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.621756 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/82eb9134-c59e-4bae-a74e-02ec60240232-csi-data-dir\") pod \"csi-hostpathplugin-wrhpn\" (UID: \"82eb9134-c59e-4bae-a74e-02ec60240232\") " pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.632421 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwstw\" (UniqueName: \"kubernetes.io/projected/4944256e-76fd-4652-80c6-a5f9217aadc3-kube-api-access-jwstw\") pod \"cni-sysctl-allowlist-ds-g55fr\" (UID: \"4944256e-76fd-4652-80c6-a5f9217aadc3\") " pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.636797 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlvnj\" (UniqueName: \"kubernetes.io/projected/f877fa69-0d40-4ecd-a63a-eba7bb0459e6-kube-api-access-xlvnj\") pod \"ingress-canary-4btxw\" (UID: \"f877fa69-0d40-4ecd-a63a-eba7bb0459e6\") " pod="openshift-ingress-canary/ingress-canary-4btxw" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.636827 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhbw9\" (UniqueName: \"kubernetes.io/projected/c42ec0c1-434b-41d2-b134-decdccef1700-kube-api-access-dhbw9\") pod \"apiserver-9ddfb9f55-b8q59\" (UID: \"c42ec0c1-434b-41d2-b134-decdccef1700\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.640906 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.659104 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxgbh\" (UniqueName: \"kubernetes.io/projected/b75d2eb4-efad-414a-8c7e-c64e0f83cb2b-kube-api-access-nxgbh\") pod \"router-default-68cf44c8b8-5pr7k\" (UID: \"b75d2eb4-efad-414a-8c7e-c64e0f83cb2b\") " pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.659992 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f877fa69-0d40-4ecd-a63a-eba7bb0459e6-cert\") pod \"ingress-canary-4btxw\" (UID: \"f877fa69-0d40-4ecd-a63a-eba7bb0459e6\") " pod="openshift-ingress-canary/ingress-canary-4btxw" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.666403 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9g2zx" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.678532 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4btxw" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.720038 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/00219034-b44a-4db2-ad80-b04ff5eacac5-bound-sa-token\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.734917 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxbtc\" (UniqueName: \"kubernetes.io/projected/7870c98b-5f29-4b36-9b11-e2564ff6bad7-kube-api-access-vxbtc\") pod \"openshift-controller-manager-operator-686468bdd5-jw44r\" (UID: \"7870c98b-5f29-4b36-9b11-e2564ff6bad7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jw44r" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.735620 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:15:59 crc kubenswrapper[5099]: E0122 14:15:59.736004 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:00.235985982 +0000 UTC m=+117.943736219 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.753893 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4gpm\" (UniqueName: \"kubernetes.io/projected/82eb9134-c59e-4bae-a74e-02ec60240232-kube-api-access-s4gpm\") pod \"csi-hostpathplugin-wrhpn\" (UID: \"82eb9134-c59e-4bae-a74e-02ec60240232\") " pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.767612 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" podStartSLOduration=95.76759631 podStartE2EDuration="1m35.76759631s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:15:59.537514102 +0000 UTC m=+117.245264339" watchObservedRunningTime="2026-01-22 14:15:59.76759631 +0000 UTC m=+117.475346547" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.782656 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rf4fl" podStartSLOduration=95.782636189 podStartE2EDuration="1m35.782636189s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:15:59.76869481 +0000 UTC m=+117.476445047" watchObservedRunningTime="2026-01-22 14:15:59.782636189 +0000 UTC m=+117.490386426" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.783988 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm"] Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.805927 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jw44r" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.851581 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.851844 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:15:59 crc kubenswrapper[5099]: E0122 14:15:59.853496 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:00.353475453 +0000 UTC m=+118.061225690 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.917307 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-8cgs9" podStartSLOduration=96.917290386 podStartE2EDuration="1m36.917290386s" podCreationTimestamp="2026-01-22 14:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:15:59.916877485 +0000 UTC m=+117.624627722" watchObservedRunningTime="2026-01-22 14:15:59.917290386 +0000 UTC m=+117.625040623" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.926866 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.949279 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-j94jx" event={"ID":"0a2c820d-64d7-48dc-845e-6fea4213ccbe","Type":"ContainerStarted","Data":"24a3ffe6655e2b83c03ac7c3857bb0553795c809f490b658c6a5edb3d17eedcc"} Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.954082 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:15:59 crc kubenswrapper[5099]: E0122 14:15:59.954447 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:00.454430095 +0000 UTC m=+118.162180332 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:15:59 crc kubenswrapper[5099]: I0122 14:15:59.985046 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.035352 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jpclx" event={"ID":"46d7f2bf-b38a-4bea-ba85-211ffe114151","Type":"ContainerStarted","Data":"992fd7defb3a8f0e448a1145c384bde770ad50d433ac209e617e0b3315ba5538"} Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.048396 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-dz5c2" event={"ID":"e31d46d0-1fc7-4e35-9e8a-6f8033388332","Type":"ContainerStarted","Data":"1aad9c5fe9cc9e76f1d35bcc669c773af8ca8df48478bd7225186a329d18b311"} Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.048474 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-t6hxb"] Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.055927 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:00 crc kubenswrapper[5099]: E0122 14:16:00.056272 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:00.556258829 +0000 UTC m=+118.264009066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.065273 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-v7shj"] Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.080351 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-q6pjq" podStartSLOduration=96.080323854 podStartE2EDuration="1m36.080323854s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:00.078654417 +0000 UTC m=+117.786404654" watchObservedRunningTime="2026-01-22 14:16:00.080323854 +0000 UTC m=+117.788074091" Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.080385 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zdwjf" event={"ID":"8c8db120-c7b7-4a59-933e-9eeda52f3a7d","Type":"ContainerStarted","Data":"f3dd02e34307cf8a05c0160985d8c3fcae32ba9c735f77f3f8279c070deedc80"} Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.091758 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt"] Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.093903 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-mvtfg" event={"ID":"01a5601d-43bd-4873-ad86-10e225b7c31b","Type":"ContainerStarted","Data":"007ddc769cae6f71ab75de11212aef40ec315194d372deeb89aaffbc09d232ae"} Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.108416 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-g87pb" podStartSLOduration=96.108395586 podStartE2EDuration="1m36.108395586s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:00.107187613 +0000 UTC m=+117.814937870" watchObservedRunningTime="2026-01-22 14:16:00.108395586 +0000 UTC m=+117.816145823" Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.136119 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rrp2m" event={"ID":"bfd2c2fb-9eff-419e-a57c-e8b0c4d73981","Type":"ContainerStarted","Data":"b9f237fba48c6bc2aa3903c0d0f699cff75dff848533c8e9de2c3c4d04309866"} Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.156610 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:00 crc kubenswrapper[5099]: E0122 14:16:00.158306 5099 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:00.65827515 +0000 UTC m=+118.366025387 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.187682 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-jvzxb" event={"ID":"ee755aa6-a943-4869-a688-c0da5d38aafa","Type":"ContainerStarted","Data":"8c99c1dce0349889ba0640ff827fc37b6776a06d966256cd7ed3d43f46db58d8"} Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.205659 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-flxrl" podStartSLOduration=96.205637336 podStartE2EDuration="1m36.205637336s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:00.204391302 +0000 UTC m=+117.912141539" watchObservedRunningTime="2026-01-22 14:16:00.205637336 +0000 UTC m=+117.913387573" Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.208117 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-gflg8" event={"ID":"e07e6e00-cfcc-4513-b231-8d27833d8687","Type":"ContainerStarted","Data":"dabf39aee967c0a1fd0d74e913fcca648b7b810246e13b227d09447799768c90"} Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.212234 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-xg9k5" event={"ID":"20945218-8d75-4f6e-ac07-6815b888e9b7","Type":"ContainerStarted","Data":"7bb3edd0e624d23ec881267a6a30a41729f5ca8b3d86118cd15d0e3a93fd8386"} Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.229011 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-v9n8v" podStartSLOduration=96.228995491 podStartE2EDuration="1m36.228995491s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:00.228645882 +0000 UTC m=+117.936396119" watchObservedRunningTime="2026-01-22 14:16:00.228995491 +0000 UTC m=+117.936745718" Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.217303 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qcmvq"] Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.239965 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-b798r" event={"ID":"ea2249f7-4927-4920-9ce2-aaa3cc5749ba","Type":"ContainerStarted","Data":"00e021014536801ada174a6b113e975333c3eaaa6507555bf345dc6b42e20dc7"} Jan 
22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.258938 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:00 crc kubenswrapper[5099]: E0122 14:16:00.259313 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:00.759300333 +0000 UTC m=+118.467050570 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.290402 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" event={"ID":"ad87e7e8-19c1-4c92-9400-9873a85e80b4","Type":"ContainerStarted","Data":"ba25363c2d79aba62c98e32ef1cfe3eab986f26922f39054acafbb8fa7cbe6f4"} Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.299999 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-rwsnz" event={"ID":"d187979f-bd07-4330-bc66-d4e12f068dda","Type":"ContainerStarted","Data":"1739dcb81befb5e1d012b39c94043b0f8e7f2451fb1f4eddb8d009fa4d8d853b"} Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.311188 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" podStartSLOduration=97.311151361 podStartE2EDuration="1m37.311151361s" podCreationTimestamp="2026-01-22 14:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:00.304247265 +0000 UTC m=+118.011997502" watchObservedRunningTime="2026-01-22 14:16:00.311151361 +0000 UTC m=+118.018901598" Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.328359 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-zrclr" event={"ID":"3e9d436d-36cf-4f6b-bbd3-6f1931e7228c","Type":"ContainerStarted","Data":"5ece44d92779bdec055a076befeb1bca1259140ac82dcd1d5a670d478885d8dc"} Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.338373 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp" event={"ID":"d69099b1-1e3d-4007-b1d1-039d6df91bb7","Type":"ContainerStarted","Data":"8baba0ca28a434358dc14100b24a20bdd55755d9453511e6bfea23a42ee19a11"} Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.340915 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp" Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.342278 5099 patch_prober.go:28] interesting 
pod/packageserver-7d4fc7d867-67tlp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" start-of-body= Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.342356 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp" podUID="d69099b1-1e3d-4007-b1d1-039d6df91bb7" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.360253 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:00 crc kubenswrapper[5099]: E0122 14:16:00.365115 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:00.865073906 +0000 UTC m=+118.572824143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.367921 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.370695 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" event={"ID":"ec74add6-7b16-4c96-aee1-336b53788c2a","Type":"ContainerStarted","Data":"25ee43f1a80260e5a1a4909becaba5d96d260f522494eeffd35de10dabf799eb"} Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.373330 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-ldzlj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.373415 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ldzlj" podUID="513664b2-32c9-4290-9ae7-2400a1c4da84" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Jan 22 14:16:00 crc kubenswrapper[5099]: E0122 14:16:00.375598 5099 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:00.8755739 +0000 UTC m=+118.583324307 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.401634 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.489650 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:00 crc kubenswrapper[5099]: E0122 14:16:00.490034 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:00.990004638 +0000 UTC m=+118.697754875 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.490731 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.493540 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-8cgs9" Jan 22 14:16:00 crc kubenswrapper[5099]: E0122 14:16:00.496967 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:00.996952937 +0000 UTC m=+118.704703174 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.547910 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-84v6d" podStartSLOduration=96.547680714 podStartE2EDuration="1m36.547680714s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:00.515371127 +0000 UTC m=+118.223121364" watchObservedRunningTime="2026-01-22 14:16:00.547680714 +0000 UTC m=+118.255430951" Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.553792 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-ldzlj" podStartSLOduration=97.55376861 podStartE2EDuration="1m37.55376861s" podCreationTimestamp="2026-01-22 14:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:00.543759049 +0000 UTC m=+118.251509286" watchObservedRunningTime="2026-01-22 14:16:00.55376861 +0000 UTC m=+118.261518847" Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.554748 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4btxw"] Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.591956 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:00 crc kubenswrapper[5099]: E0122 14:16:00.592738 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:01.092711878 +0000 UTC m=+118.800462115 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.694101 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:00 crc kubenswrapper[5099]: E0122 14:16:00.698597 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:01.198579732 +0000 UTC m=+118.906329969 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.755108 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.836851 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:00 crc kubenswrapper[5099]: E0122 14:16:00.841087 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:01.341015251 +0000 UTC m=+119.048765488 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.863181 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-jvzxb" podStartSLOduration=96.863135222 podStartE2EDuration="1m36.863135222s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:00.863024989 +0000 UTC m=+118.570775256" watchObservedRunningTime="2026-01-22 14:16:00.863135222 +0000 UTC m=+118.570885469" Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.905738 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-w89ff" podStartSLOduration=96.905702777 podStartE2EDuration="1m36.905702777s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:00.891241875 +0000 UTC m=+118.598992122" watchObservedRunningTime="2026-01-22 14:16:00.905702777 +0000 UTC m=+118.613453014" Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.925112 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp" podStartSLOduration=96.925092664 podStartE2EDuration="1m36.925092664s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:00.924878418 +0000 UTC m=+118.632628655" watchObservedRunningTime="2026-01-22 14:16:00.925092664 +0000 UTC m=+118.632842901" Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.943461 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:00 crc kubenswrapper[5099]: E0122 14:16:00.943838 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:01.443821802 +0000 UTC m=+119.151572039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.955132 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-jpclx" podStartSLOduration=96.955105578 podStartE2EDuration="1m36.955105578s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:00.952712474 +0000 UTC m=+118.660462711" watchObservedRunningTime="2026-01-22 14:16:00.955105578 +0000 UTC m=+118.662855815" Jan 22 14:16:00 crc kubenswrapper[5099]: I0122 14:16:00.977638 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-j94jx" podStartSLOduration=60.97761667 podStartE2EDuration="1m0.97761667s" podCreationTimestamp="2026-01-22 14:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:00.971873674 +0000 UTC m=+118.679623911" watchObservedRunningTime="2026-01-22 14:16:00.97761667 +0000 UTC m=+118.685366907" Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.024545 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rrp2m" podStartSLOduration=97.024524524 podStartE2EDuration="1m37.024524524s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:01.02364087 +0000 UTC m=+118.731391107" watchObservedRunningTime="2026-01-22 14:16:01.024524524 +0000 UTC m=+118.732274761" Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.044834 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:01 crc kubenswrapper[5099]: E0122 14:16:01.045489 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:01.545462323 +0000 UTC m=+119.253212560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.146719 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:01 crc kubenswrapper[5099]: E0122 14:16:01.150700 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:01.65068311 +0000 UTC m=+119.358433347 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.241334 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-b8q59"] Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.251249 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:01 crc kubenswrapper[5099]: E0122 14:16:01.251877 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:01.751853047 +0000 UTC m=+119.459603284 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.280806 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k7g4m"] Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.347820 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wrhpn"] Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.355025 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:01 crc kubenswrapper[5099]: E0122 14:16:01.355457 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:01.85544159 +0000 UTC m=+119.563191827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.442023 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jw44r"] Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.458076 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:01 crc kubenswrapper[5099]: E0122 14:16:01.470798 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:01.970748682 +0000 UTC m=+119.678498919 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.508564 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" event={"ID":"7894c17b-6de7-426e-b27a-4834b7186e8f","Type":"ContainerStarted","Data":"22cc0ab38c818316cb48bae6f0f671fff417bf465bf9a69bf87be79b48cebfab"} Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.509906 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.545879 5099 generic.go:358] "Generic (PLEG): container finished" podID="0a2c820d-64d7-48dc-845e-6fea4213ccbe" containerID="24a3ffe6655e2b83c03ac7c3857bb0553795c809f490b658c6a5edb3d17eedcc" exitCode=0 Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.546094 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-j94jx" event={"ID":"0a2c820d-64d7-48dc-845e-6fea4213ccbe","Type":"ContainerDied","Data":"24a3ffe6655e2b83c03ac7c3857bb0553795c809f490b658c6a5edb3d17eedcc"} Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.563925 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:01 crc kubenswrapper[5099]: E0122 14:16:01.564468 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:02.064455947 +0000 UTC m=+119.772206184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.566800 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" podStartSLOduration=97.566779679 podStartE2EDuration="1m37.566779679s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:01.545753558 +0000 UTC m=+119.253503795" watchObservedRunningTime="2026-01-22 14:16:01.566779679 +0000 UTC m=+119.274529916" Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.576798 5099 generic.go:358] "Generic (PLEG): container finished" podID="f258327b-5062-4466-bd24-cc22c2f56087" containerID="230851d3d639ab16c3df54e0064ac760af9b9ed886259672a09dd5b8236623f4" exitCode=0 Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.576908 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" event={"ID":"f258327b-5062-4466-bd24-cc22c2f56087","Type":"ContainerDied","Data":"230851d3d639ab16c3df54e0064ac760af9b9ed886259672a09dd5b8236623f4"} Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.594927 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rrp2m" event={"ID":"bfd2c2fb-9eff-419e-a57c-e8b0c4d73981","Type":"ContainerStarted","Data":"4544f5c99d6808730168929a1f9f7e5390bf9d97e1bd0f1a4e373edb7163e782"} Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.634393 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" event={"ID":"b75d2eb4-efad-414a-8c7e-c64e0f83cb2b","Type":"ContainerStarted","Data":"e61416212ba33cb2e249ef0b497cf580fe2729e749aa6e7f3aec9f2998750683"} Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.644074 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k7g4m" event={"ID":"5c0223c3-0295-4e25-91af-f648e4b081b8","Type":"ContainerStarted","Data":"0191820f394233545e9269be117b5b1aae4738fe32d11e6bd9f2f2e35bc4ce04"} Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.659527 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9g2zx" event={"ID":"7bb9dbcc-e884-49d5-9f83-751acd16b0e5","Type":"ContainerStarted","Data":"2dbbace750e9ce4a81afa7550fe1130c2a8898ee80fc9a9ce9066e01fb4e8f25"} Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.665094 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:01 crc kubenswrapper[5099]: E0122 14:16:01.668511 5099 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:02.168484592 +0000 UTC m=+119.876234869 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.708988 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" event={"ID":"4944256e-76fd-4652-80c6-a5f9217aadc3","Type":"ContainerStarted","Data":"788ce4c84386b1303573e7749da6b949dc86c3897ab7a00b3d6af877e6aca734"} Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.766460 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt" event={"ID":"70535248-a215-4c17-bae6-b193309b6ab3","Type":"ContainerStarted","Data":"20fcf7611fa1629e4aba160c6d7d7672276b573f9fe906e51f44b8c87c48bda5"} Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.775542 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.775663 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" event={"ID":"c42ec0c1-434b-41d2-b134-decdccef1700","Type":"ContainerStarted","Data":"dfc741cf1de956b681f6c2ee358778dd54b69e2483d0741e490dd8d080f96ce4"} Jan 22 14:16:01 crc kubenswrapper[5099]: E0122 14:16:01.776199 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:02.276179156 +0000 UTC m=+119.983929393 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.784636 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4btxw" event={"ID":"f877fa69-0d40-4ecd-a63a-eba7bb0459e6","Type":"ContainerStarted","Data":"b99246e6f1773eaf8176897f49b8aea0d2f8d525e283d8ffb62acaa4a00aa465"} Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.787672 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm" event={"ID":"4ce76424-f6a9-4f4d-8cea-7935ccf75fca","Type":"ContainerStarted","Data":"560f5a7bdc99e009621e0d37f72de4263534a160c1e19ad6df2f0f7ecb9580dd"} Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.791609 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-b798r" event={"ID":"ea2249f7-4927-4920-9ce2-aaa3cc5749ba","Type":"ContainerStarted","Data":"5697f0bb63292db0a810756f3a263baa05c3623a07dbac952eacae913ff92b25"} Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.792885 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-b798r" Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.802860 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" event={"ID":"ad87e7e8-19c1-4c92-9400-9873a85e80b4","Type":"ContainerStarted","Data":"e0d647cdd912e67a0310f0285415a7dc9ed9f6f36a2846b251cb651ea8016a0c"} Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.804313 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.805957 5099 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-9nglq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.806009 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" podUID="ad87e7e8-19c1-4c92-9400-9873a85e80b4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.814611 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-rwsnz" event={"ID":"d187979f-bd07-4330-bc66-d4e12f068dda","Type":"ContainerStarted","Data":"96a7252b6d53ffb5cf8d5f3a93f065623e8662246a4bdf44e99bbfa7e8c2e5cb"} Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.831413 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp" 
event={"ID":"d69099b1-1e3d-4007-b1d1-039d6df91bb7","Type":"ContainerStarted","Data":"00ef2f9588293ac483628a72bd665646433f19e2ffa70c27d4996125f5574e25"} Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.851965 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-b798r" podStartSLOduration=97.851934434 podStartE2EDuration="1m37.851934434s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:01.850842074 +0000 UTC m=+119.558592311" watchObservedRunningTime="2026-01-22 14:16:01.851934434 +0000 UTC m=+119.559684671" Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.858348 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qcmvq" event={"ID":"83de5641-52fb-4052-b0fc-28e02eef5f3a","Type":"ContainerStarted","Data":"d47d87b81df6d12059913890023c9008de48a8b25ed561531e689f36d57d9a24"} Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.879399 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:01 crc kubenswrapper[5099]: E0122 14:16:01.881148 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:02.381122756 +0000 UTC m=+120.088872993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.896466 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-rzgsc" event={"ID":"8efd2ae7-b5de-4a8a-beb9-fcc7adcb2241","Type":"ContainerStarted","Data":"4993d4c2f589adf8bccd72541497f6bd07d5d5d5fa6701250def16d55c990141"} Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.917389 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-rwsnz" podStartSLOduration=97.917365721 podStartE2EDuration="1m37.917365721s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:01.881832485 +0000 UTC m=+119.589582722" watchObservedRunningTime="2026-01-22 14:16:01.917365721 +0000 UTC m=+119.625115958" Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.920863 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-v7shj" event={"ID":"ec31b794-4b87-452d-8e77-14a8000d6812","Type":"ContainerStarted","Data":"87c06a369745ff220b4705595fb2d6ab3725148d1430e93316f1f6d182680344"} Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.945833 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-2pf7j" event={"ID":"65c18850-3f68-4603-955b-12a4ab882766","Type":"ContainerStarted","Data":"df732ad1887828ade3ea8f06c84a86c2edd3b01c5ebfb062643d5da00bbc1edf"} Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.947098 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-rzgsc" podStartSLOduration=97.947036076 podStartE2EDuration="1m37.947036076s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:01.945464254 +0000 UTC m=+119.653214491" watchObservedRunningTime="2026-01-22 14:16:01.947036076 +0000 UTC m=+119.654786333" Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.951857 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" podStartSLOduration=97.951836016 podStartE2EDuration="1m37.951836016s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:01.919712964 +0000 UTC m=+119.627463201" watchObservedRunningTime="2026-01-22 14:16:01.951836016 +0000 UTC m=+119.659586253" Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.959019 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-2pf7j" Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.984501 5099 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:01 crc kubenswrapper[5099]: E0122 14:16:01.985150 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:02.485132941 +0000 UTC m=+120.192883178 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:01 crc kubenswrapper[5099]: I0122 14:16:01.997493 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-2pf7j" podStartSLOduration=98.997475736 podStartE2EDuration="1m38.997475736s" podCreationTimestamp="2026-01-22 14:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:01.995237985 +0000 UTC m=+119.702988222" watchObservedRunningTime="2026-01-22 14:16:01.997475736 +0000 UTC m=+119.705225993" Jan 22 14:16:02 crc kubenswrapper[5099]: I0122 14:16:02.085155 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:02 crc kubenswrapper[5099]: E0122 14:16:02.085366 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:02.585325402 +0000 UTC m=+120.293075639 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:02 crc kubenswrapper[5099]: I0122 14:16:02.087721 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:02 crc kubenswrapper[5099]: E0122 14:16:02.088202 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:02.58818667 +0000 UTC m=+120.295936907 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:02 crc kubenswrapper[5099]: I0122 14:16:02.132241 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:16:02 crc kubenswrapper[5099]: I0122 14:16:02.211800 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:02 crc kubenswrapper[5099]: E0122 14:16:02.212244 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:02.712225927 +0000 UTC m=+120.419976164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:02 crc kubenswrapper[5099]: I0122 14:16:02.314648 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:02 crc kubenswrapper[5099]: E0122 14:16:02.315091 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:02.815073041 +0000 UTC m=+120.522823278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:02 crc kubenswrapper[5099]: I0122 14:16:02.415538 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:02 crc kubenswrapper[5099]: E0122 14:16:02.415874 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:02.915857298 +0000 UTC m=+120.623607535 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:02 crc kubenswrapper[5099]: I0122 14:16:02.519333 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:02 crc kubenswrapper[5099]: E0122 14:16:02.519800 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:03.019777189 +0000 UTC m=+120.727527426 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:02 crc kubenswrapper[5099]: I0122 14:16:02.638882 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:02 crc kubenswrapper[5099]: E0122 14:16:02.645448 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:03.139952433 +0000 UTC m=+120.847702670 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:02 crc kubenswrapper[5099]: I0122 14:16:02.742952 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:02 crc kubenswrapper[5099]: E0122 14:16:02.743542 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:03.243522755 +0000 UTC m=+120.951272992 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:02 crc kubenswrapper[5099]: I0122 14:16:02.832848 5099 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-67tlp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 14:16:02 crc kubenswrapper[5099]: I0122 14:16:02.833239 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp" podUID="d69099b1-1e3d-4007-b1d1-039d6df91bb7" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 14:16:02 crc kubenswrapper[5099]: I0122 14:16:02.844570 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:02 crc kubenswrapper[5099]: E0122 14:16:02.845354 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:03.345328769 +0000 UTC m=+121.053079006 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:02 crc kubenswrapper[5099]: I0122 14:16:02.947444 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:02 crc kubenswrapper[5099]: E0122 14:16:02.947829 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:03.447813133 +0000 UTC m=+121.155563370 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:02 crc kubenswrapper[5099]: I0122 14:16:02.990475 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-xg9k5" event={"ID":"20945218-8d75-4f6e-ac07-6815b888e9b7","Type":"ContainerStarted","Data":"91bef1205ca4dca201a6e33da20eab5ecac90cf62008f6635b716d230701e225"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.051314 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:03 crc kubenswrapper[5099]: E0122 14:16:03.051445 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:03.551411266 +0000 UTC m=+121.259161493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.053035 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:03 crc kubenswrapper[5099]: E0122 14:16:03.055430 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:03.555420414 +0000 UTC m=+121.263170731 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.154586 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:03 crc kubenswrapper[5099]: E0122 14:16:03.155401 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:03.655364049 +0000 UTC m=+121.363114296 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.167642 5099 generic.go:358] "Generic (PLEG): container finished" podID="c42ec0c1-434b-41d2-b134-decdccef1700" containerID="b756ba63985b93212e21fc4a437515976fdae10902f4bdebd534b635689bb4e3" exitCode=0 Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.168017 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" event={"ID":"c42ec0c1-434b-41d2-b134-decdccef1700","Type":"ContainerDied","Data":"b756ba63985b93212e21fc4a437515976fdae10902f4bdebd534b635689bb4e3"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.179616 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qcmvq" event={"ID":"83de5641-52fb-4052-b0fc-28e02eef5f3a","Type":"ContainerStarted","Data":"2506253bd4b5213fd8620fb98dfaa9b0033209f054c30e71765665b6b0952e4b"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.181242 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-t6hxb" event={"ID":"d12cb731-4816-42a2-9a62-84a8695ece7e","Type":"ContainerStarted","Data":"5c458d21d2978bfb0a2373086279a615c041352900208403ae3fa6a2b2056136"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.181277 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-t6hxb" event={"ID":"d12cb731-4816-42a2-9a62-84a8695ece7e","Type":"ContainerStarted","Data":"caf90ea8d14944419c001dfd90173b0ff1c500d05e6a6a759b9c31a6ce45fa62"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.181290 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-t6hxb" event={"ID":"d12cb731-4816-42a2-9a62-84a8695ece7e","Type":"ContainerStarted","Data":"b22df9693f082acaf16bcfc2d4310b201f76c14f6357a9a79cc312cddea0f457"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.183016 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-gflg8" event={"ID":"e07e6e00-cfcc-4513-b231-8d27833d8687","Type":"ContainerStarted","Data":"504feff761ff6746fb019a2b16e960f4f2b7dc0564d83a5a9fe5c25238415ead"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.184462 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" event={"ID":"4944256e-76fd-4652-80c6-a5f9217aadc3","Type":"ContainerStarted","Data":"2fac788e02f10ec85576f9fffa7ba022941fc907ea45deb0ffaacd11fcdcbfc4"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.189900 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" event={"ID":"b75d2eb4-efad-414a-8c7e-c64e0f83cb2b","Type":"ContainerStarted","Data":"70fa8e49784a0810952e0bc580ea267784290f63067b67968d6c830cd21b3db7"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.192556 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-xg9k5" 
podStartSLOduration=99.192529318 podStartE2EDuration="1m39.192529318s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:03.190665487 +0000 UTC m=+120.898415734" watchObservedRunningTime="2026-01-22 14:16:03.192529318 +0000 UTC m=+120.900279555" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.205057 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zdwjf" event={"ID":"8c8db120-c7b7-4a59-933e-9eeda52f3a7d","Type":"ContainerStarted","Data":"8bd6a50a945b4f72bb92a0103165c5b59a4045f45257e08f09665c2df61b8a63"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.205114 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zdwjf" event={"ID":"8c8db120-c7b7-4a59-933e-9eeda52f3a7d","Type":"ContainerStarted","Data":"7cc76cd136c6b7683b71d57f5e8af3b77cd44862d6061599a8baa1fe3d282413"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.208425 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4btxw" event={"ID":"f877fa69-0d40-4ecd-a63a-eba7bb0459e6","Type":"ContainerStarted","Data":"2414623f361173da9f4239c84ad83b322f584f3e5e91c0ef4443e7fe047cf2d1"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.217823 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-zrclr" event={"ID":"3e9d436d-36cf-4f6b-bbd3-6f1931e7228c","Type":"ContainerStarted","Data":"195b4c77577f6cebd0473e951b11bd573fc04f24ab82fd6391bf166ae111473f"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.217884 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-zrclr" event={"ID":"3e9d436d-36cf-4f6b-bbd3-6f1931e7228c","Type":"ContainerStarted","Data":"8a0e06a85f428247e13ca68fbb19894ab165fca9b71fe482dd488cbc9e21e184"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.221716 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.255980 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:03 crc kubenswrapper[5099]: E0122 14:16:03.256361 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:03.756348631 +0000 UTC m=+121.464098868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.358115 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:03 crc kubenswrapper[5099]: E0122 14:16:03.358405 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:03.858357892 +0000 UTC m=+121.566108129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.359748 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:03 crc kubenswrapper[5099]: E0122 14:16:03.360416 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:03.860403447 +0000 UTC m=+121.568153684 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.460849 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:03 crc kubenswrapper[5099]: E0122 14:16:03.461546 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:03.961527113 +0000 UTC m=+121.669277350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.554297 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-v7shj" event={"ID":"ec31b794-4b87-452d-8e77-14a8000d6812","Type":"ContainerStarted","Data":"effee80be212014ffccdb702703c8db226c65d3edc1ad88ec235564ee1adf588"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.554338 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-v7shj" event={"ID":"ec31b794-4b87-452d-8e77-14a8000d6812","Type":"ContainerStarted","Data":"067114f879076a65edb748c1d218043e4bda4df6caab1a91adcf069d071658fa"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.554349 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jw44r" event={"ID":"7870c98b-5f29-4b36-9b11-e2564ff6bad7","Type":"ContainerStarted","Data":"61a2b7b3f39fb29336288f889af9ed97fb3f13e006765ec40477b4fa83c8e6ad"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.554371 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.554388 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.554403 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jw44r" event={"ID":"7870c98b-5f29-4b36-9b11-e2564ff6bad7","Type":"ContainerStarted","Data":"46296697c1212a74a051013b098d11afaad9dbc8090969c2a28350731d0d0cde"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 
14:16:03.554414 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-mvtfg" event={"ID":"01a5601d-43bd-4873-ad86-10e225b7c31b","Type":"ContainerStarted","Data":"576b1b7efd92ec177b4c2ecb719373a085f41da578f08410ca953712bf67fbe6"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.554434 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-dz5c2" event={"ID":"e31d46d0-1fc7-4e35-9e8a-6f8033388332","Type":"ContainerStarted","Data":"aac70fd750f5bba62fbd0052f4411de572adc09c119bdba4b4498691cd3414e8"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.554447 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" event={"ID":"f258327b-5062-4466-bd24-cc22c2f56087","Type":"ContainerStarted","Data":"0106091b15e80fe93cc126762333ab5f2fd30b9d2ead9e0c296cdf989c297677"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.554461 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k7g4m" event={"ID":"5c0223c3-0295-4e25-91af-f648e4b081b8","Type":"ContainerStarted","Data":"1176dfed98ef087709ca2d45201045a2d58559f70a58ec618965e754a783a73b"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.554473 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" event={"ID":"82eb9134-c59e-4bae-a74e-02ec60240232","Type":"ContainerStarted","Data":"11d0598ff3e653b878ec989155b193f4f96eb4891653f15a2f32d7cb8181c49d"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.554488 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9g2zx" event={"ID":"7bb9dbcc-e884-49d5-9f83-751acd16b0e5","Type":"ContainerStarted","Data":"4257de07196db8de7e4db0cfe1f312dc7254da6f20f87812560ad13fde0ec460"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.554501 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt" event={"ID":"70535248-a215-4c17-bae6-b193309b6ab3","Type":"ContainerStarted","Data":"8c141229ffdda0ec839c4f6d9626effe539363d00cc5590f7c89f6de9c6ced2b"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.554513 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm" event={"ID":"4ce76424-f6a9-4f4d-8cea-7935ccf75fca","Type":"ContainerStarted","Data":"19365095b4fdb4a1c060c2c44d1e4233d6b2655058ab5a0536af204524425c0a"} Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.554626 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.559854 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.561674 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.566099 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.566524 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82aebf1f-ffc4-46c5-a8b3-2873c83b39bc-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"82aebf1f-ffc4-46c5-a8b3-2873c83b39bc\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.566568 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82aebf1f-ffc4-46c5-a8b3-2873c83b39bc-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"82aebf1f-ffc4-46c5-a8b3-2873c83b39bc\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.581031 5099 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-rj8rm container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.581241 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm" podUID="4ce76424-f6a9-4f4d-8cea-7935ccf75fca" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.581623 5099 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-tgcgt container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.581712 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt" podUID="70535248-a215-4c17-bae6-b193309b6ab3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.581623 5099 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-9nglq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.581789 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" podUID="ad87e7e8-19c1-4c92-9400-9873a85e80b4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.583009 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.585039 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Jan 22 14:16:03 crc kubenswrapper[5099]: E0122 14:16:03.589313 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:04.089296073 +0000 UTC m=+121.797046310 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.589589 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-2pf7j" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.589795 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-67tlp" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.611727 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zdwjf" podStartSLOduration=99.611700812 podStartE2EDuration="1m39.611700812s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:03.599454639 +0000 UTC m=+121.307204876" watchObservedRunningTime="2026-01-22 14:16:03.611700812 +0000 UTC m=+121.319451049" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.661647 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.664514 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" podStartSLOduration=8.664498485 podStartE2EDuration="8.664498485s" podCreationTimestamp="2026-01-22 14:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:03.662061579 +0000 UTC m=+121.369811826" watchObservedRunningTime="2026-01-22 14:16:03.664498485 +0000 UTC m=+121.372248722" Jan 22 14:16:03 crc 
kubenswrapper[5099]: I0122 14:16:03.668888 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.669090 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82aebf1f-ffc4-46c5-a8b3-2873c83b39bc-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"82aebf1f-ffc4-46c5-a8b3-2873c83b39bc\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.669126 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82aebf1f-ffc4-46c5-a8b3-2873c83b39bc-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"82aebf1f-ffc4-46c5-a8b3-2873c83b39bc\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 14:16:03 crc kubenswrapper[5099]: E0122 14:16:03.670261 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:04.170236102 +0000 UTC m=+121.877986349 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.670550 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82aebf1f-ffc4-46c5-a8b3-2873c83b39bc-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"82aebf1f-ffc4-46c5-a8b3-2873c83b39bc\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.696398 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82aebf1f-ffc4-46c5-a8b3-2873c83b39bc-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"82aebf1f-ffc4-46c5-a8b3-2873c83b39bc\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.700388 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4btxw" podStartSLOduration=8.700373289 podStartE2EDuration="8.700373289s" podCreationTimestamp="2026-01-22 14:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:03.697907652 +0000 UTC m=+121.405657889" watchObservedRunningTime="2026-01-22 14:16:03.700373289 +0000 UTC m=+121.408123526" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.737439 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm" 
podStartSLOduration=99.737424206 podStartE2EDuration="1m39.737424206s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:03.736226894 +0000 UTC m=+121.443977151" watchObservedRunningTime="2026-01-22 14:16:03.737424206 +0000 UTC m=+121.445174433" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.770447 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-v7shj" podStartSLOduration=99.770432162 podStartE2EDuration="1m39.770432162s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:03.76779981 +0000 UTC m=+121.475550047" watchObservedRunningTime="2026-01-22 14:16:03.770432162 +0000 UTC m=+121.478182399" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.775825 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:03 crc kubenswrapper[5099]: E0122 14:16:03.776414 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:04.276401494 +0000 UTC m=+121.984151731 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.797092 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-t6hxb" podStartSLOduration=100.797072475 podStartE2EDuration="1m40.797072475s" podCreationTimestamp="2026-01-22 14:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:03.796453989 +0000 UTC m=+121.504204226" watchObservedRunningTime="2026-01-22 14:16:03.797072475 +0000 UTC m=+121.504822712" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.826814 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k7g4m" podStartSLOduration=99.826775722 podStartE2EDuration="1m39.826775722s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:03.822431594 +0000 UTC m=+121.530181831" watchObservedRunningTime="2026-01-22 14:16:03.826775722 +0000 UTC m=+121.534525959" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.886358 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:03 crc kubenswrapper[5099]: E0122 14:16:03.887310 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:04.387288236 +0000 UTC m=+122.095038473 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.907861 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt" podStartSLOduration=99.907839353 podStartE2EDuration="1m39.907839353s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:03.907234217 +0000 UTC m=+121.614984454" watchObservedRunningTime="2026-01-22 14:16:03.907839353 +0000 UTC m=+121.615589590" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.908788 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.908932 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-mvtfg" podStartSLOduration=99.908927013 podStartE2EDuration="1m39.908927013s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:03.874395075 +0000 UTC m=+121.582145312" watchObservedRunningTime="2026-01-22 14:16:03.908927013 +0000 UTC m=+121.616677250" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.928635 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.938297 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-5pr7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:16:03 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 22 14:16:03 crc kubenswrapper[5099]: [+]process-running ok Jan 22 14:16:03 crc kubenswrapper[5099]: healthz check failed Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.938698 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" podUID="b75d2eb4-efad-414a-8c7e-c64e0f83cb2b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:16:03 crc kubenswrapper[5099]: I0122 14:16:03.956401 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" podStartSLOduration=99.956374242 podStartE2EDuration="1m39.956374242s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:03.931722532 +0000 UTC m=+121.639472769" watchObservedRunningTime="2026-01-22 14:16:03.956374242 +0000 UTC m=+121.664124479" Jan 22 14:16:03 crc kubenswrapper[5099]: E0122 14:16:03.991273 5099 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:04.491253329 +0000 UTC m=+122.199003566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.000361 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.010267 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-gflg8" podStartSLOduration=101.010238544 podStartE2EDuration="1m41.010238544s" podCreationTimestamp="2026-01-22 14:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:04.009009692 +0000 UTC m=+121.716759929" watchObservedRunningTime="2026-01-22 14:16:04.010238544 +0000 UTC m=+121.717988781" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.064870 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-zrclr" podStartSLOduration=100.064850047 podStartE2EDuration="1m40.064850047s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:04.060081878 +0000 UTC m=+121.767832115" watchObservedRunningTime="2026-01-22 14:16:04.064850047 +0000 UTC m=+121.772600284" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.070909 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-j94jx" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.102353 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a2c820d-64d7-48dc-845e-6fea4213ccbe-config-volume\") pod \"0a2c820d-64d7-48dc-845e-6fea4213ccbe\" (UID: \"0a2c820d-64d7-48dc-845e-6fea4213ccbe\") " Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.102622 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.102661 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a2c820d-64d7-48dc-845e-6fea4213ccbe-secret-volume\") pod \"0a2c820d-64d7-48dc-845e-6fea4213ccbe\" (UID: \"0a2c820d-64d7-48dc-845e-6fea4213ccbe\") " Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.102688 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw9vf\" (UniqueName: \"kubernetes.io/projected/0a2c820d-64d7-48dc-845e-6fea4213ccbe-kube-api-access-gw9vf\") pod \"0a2c820d-64d7-48dc-845e-6fea4213ccbe\" (UID: \"0a2c820d-64d7-48dc-845e-6fea4213ccbe\") " Jan 22 14:16:04 crc kubenswrapper[5099]: E0122 14:16:04.104136 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:04.604107953 +0000 UTC m=+122.311858200 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.104592 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a2c820d-64d7-48dc-845e-6fea4213ccbe-config-volume" (OuterVolumeSpecName: "config-volume") pod "0a2c820d-64d7-48dc-845e-6fea4213ccbe" (UID: "0a2c820d-64d7-48dc-845e-6fea4213ccbe"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.109776 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" podStartSLOduration=100.109761807 podStartE2EDuration="1m40.109761807s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:04.107370233 +0000 UTC m=+121.815120470" watchObservedRunningTime="2026-01-22 14:16:04.109761807 +0000 UTC m=+121.817512044" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.119757 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2c820d-64d7-48dc-845e-6fea4213ccbe-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0a2c820d-64d7-48dc-845e-6fea4213ccbe" (UID: "0a2c820d-64d7-48dc-845e-6fea4213ccbe"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.123253 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a2c820d-64d7-48dc-845e-6fea4213ccbe-kube-api-access-gw9vf" (OuterVolumeSpecName: "kube-api-access-gw9vf") pod "0a2c820d-64d7-48dc-845e-6fea4213ccbe" (UID: "0a2c820d-64d7-48dc-845e-6fea4213ccbe"). InnerVolumeSpecName "kube-api-access-gw9vf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.158127 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-9g2zx" podStartSLOduration=9.15810976 podStartE2EDuration="9.15810976s" podCreationTimestamp="2026-01-22 14:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:04.157673549 +0000 UTC m=+121.865423786" watchObservedRunningTime="2026-01-22 14:16:04.15810976 +0000 UTC m=+121.865860007" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.182785 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jw44r" podStartSLOduration=101.18276853 podStartE2EDuration="1m41.18276853s" podCreationTimestamp="2026-01-22 14:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:04.182142233 +0000 UTC m=+121.889892470" watchObservedRunningTime="2026-01-22 14:16:04.18276853 +0000 UTC m=+121.890518767" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.204611 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.204672 5099 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a2c820d-64d7-48dc-845e-6fea4213ccbe-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.204683 5099 
reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a2c820d-64d7-48dc-845e-6fea4213ccbe-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.204693 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gw9vf\" (UniqueName: \"kubernetes.io/projected/0a2c820d-64d7-48dc-845e-6fea4213ccbe-kube-api-access-gw9vf\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:04 crc kubenswrapper[5099]: E0122 14:16:04.204949 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:04.704937422 +0000 UTC m=+122.412687659 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.207830 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-g55fr"] Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.306263 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-dz5c2" event={"ID":"e31d46d0-1fc7-4e35-9e8a-6f8033388332","Type":"ContainerStarted","Data":"36ca2b0ec4a1316af40727e2c4832866f4459750cf892f46484b2b4bbb27aef6"} Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.310084 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:04 crc kubenswrapper[5099]: E0122 14:16:04.310291 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:04.810260623 +0000 UTC m=+122.518010860 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.310471 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" event={"ID":"c42ec0c1-434b-41d2-b134-decdccef1700","Type":"ContainerStarted","Data":"39082f6d92611a07d3c0552d286b915f332f07844de3c95a113f7524051ec76a"} Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.310506 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:04 crc kubenswrapper[5099]: E0122 14:16:04.311069 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:04.811052924 +0000 UTC m=+122.518803161 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.352408 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qcmvq" event={"ID":"83de5641-52fb-4052-b0fc-28e02eef5f3a","Type":"ContainerStarted","Data":"defda42db1b94a9a157d88fabada8d14b4262dc916d57382b866987328828e39"} Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.353481 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-qcmvq" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.360087 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-dz5c2" podStartSLOduration=101.360069225 podStartE2EDuration="1m41.360069225s" podCreationTimestamp="2026-01-22 14:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:04.356333034 +0000 UTC m=+122.064083281" watchObservedRunningTime="2026-01-22 14:16:04.360069225 +0000 UTC m=+122.067819472" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.373351 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-j94jx" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.374932 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-j94jx" event={"ID":"0a2c820d-64d7-48dc-845e-6fea4213ccbe","Type":"ContainerDied","Data":"c0bb71f0283f4a08232f8cdf313ce5be0fce745867c894ac9091f6968c3a0706"} Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.374970 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0bb71f0283f4a08232f8cdf313ce5be0fce745867c894ac9091f6968c3a0706" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.379238 5099 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-9nglq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.379288 5099 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" podUID="ad87e7e8-19c1-4c92-9400-9873a85e80b4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.401352 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-qcmvq" podStartSLOduration=9.401299085 podStartE2EDuration="9.401299085s" podCreationTimestamp="2026-01-22 14:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:04.396612057 +0000 UTC m=+122.104362314" watchObservedRunningTime="2026-01-22 14:16:04.401299085 +0000 UTC m=+122.109049322" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.409919 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rj8rm" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.410813 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.411514 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:04 crc kubenswrapper[5099]: E0122 14:16:04.412746 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:04.912723435 +0000 UTC m=+122.620473672 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.412831 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-tgcgt" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.532586 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:04 crc kubenswrapper[5099]: E0122 14:16:04.533051 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:05.033030092 +0000 UTC m=+122.740780329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.633684 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:04 crc kubenswrapper[5099]: E0122 14:16:04.633857 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:05.13382688 +0000 UTC m=+122.841577127 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.634074 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:04 crc kubenswrapper[5099]: E0122 14:16:04.634460 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:05.134446986 +0000 UTC m=+122.842197223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.735299 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:04 crc kubenswrapper[5099]: E0122 14:16:04.735773 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:05.235737847 +0000 UTC m=+122.943488094 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.836472 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:04 crc kubenswrapper[5099]: E0122 14:16:04.836857 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:05.336844553 +0000 UTC m=+123.044594790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.938482 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:04 crc kubenswrapper[5099]: E0122 14:16:04.939321 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:05.439297985 +0000 UTC m=+123.147048222 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.954228 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-5pr7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:16:04 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 22 14:16:04 crc kubenswrapper[5099]: [+]process-running ok Jan 22 14:16:04 crc kubenswrapper[5099]: healthz check failed Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.954324 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" podUID="b75d2eb4-efad-414a-8c7e-c64e0f83cb2b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:16:04 crc kubenswrapper[5099]: I0122 14:16:04.973474 5099 ???:1] "http: TLS handshake error from 192.168.126.11:51138: no serving certificate available for the kubelet" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.041453 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:05 crc kubenswrapper[5099]: E0122 14:16:05.042022 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:05.541999534 +0000 UTC m=+123.249749771 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.068904 5099 ???:1] "http: TLS handshake error from 192.168.126.11:51146: no serving certificate available for the kubelet" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.091545 5099 ???:1] "http: TLS handshake error from 192.168.126.11:51156: no serving certificate available for the kubelet" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.142585 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:05 crc kubenswrapper[5099]: E0122 14:16:05.143116 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:05.643094819 +0000 UTC m=+123.350845056 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.146377 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jcl5d"] Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.147068 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0a2c820d-64d7-48dc-845e-6fea4213ccbe" containerName="collect-profiles" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.147090 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2c820d-64d7-48dc-845e-6fea4213ccbe" containerName="collect-profiles" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.147240 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="0a2c820d-64d7-48dc-845e-6fea4213ccbe" containerName="collect-profiles" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.153644 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jcl5d" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.156198 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.157991 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jcl5d"] Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.177764 5099 ???:1] "http: TLS handshake error from 192.168.126.11:51170: no serving certificate available for the kubelet" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.244840 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.244952 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6vb5\" (UniqueName: \"kubernetes.io/projected/fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5-kube-api-access-g6vb5\") pod \"community-operators-jcl5d\" (UID: \"fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5\") " pod="openshift-marketplace/community-operators-jcl5d" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.245000 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5-utilities\") pod \"community-operators-jcl5d\" (UID: \"fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5\") " pod="openshift-marketplace/community-operators-jcl5d" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.245029 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5-catalog-content\") pod \"community-operators-jcl5d\" (UID: \"fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5\") " pod="openshift-marketplace/community-operators-jcl5d" Jan 22 14:16:05 crc kubenswrapper[5099]: E0122 14:16:05.245260 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:05.745242043 +0000 UTC m=+123.452992280 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.271466 5099 ???:1] "http: TLS handshake error from 192.168.126.11:51184: no serving certificate available for the kubelet" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.345913 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:05 crc kubenswrapper[5099]: E0122 14:16:05.346066 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:05.84604085 +0000 UTC m=+123.553791097 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.346125 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5-utilities\") pod \"community-operators-jcl5d\" (UID: \"fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5\") " pod="openshift-marketplace/community-operators-jcl5d" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.346176 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5-catalog-content\") pod \"community-operators-jcl5d\" (UID: \"fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5\") " pod="openshift-marketplace/community-operators-jcl5d" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.346364 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.346466 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g6vb5\" (UniqueName: \"kubernetes.io/projected/fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5-kube-api-access-g6vb5\") pod \"community-operators-jcl5d\" (UID: \"fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5\") " pod="openshift-marketplace/community-operators-jcl5d" Jan 22 14:16:05 crc 
kubenswrapper[5099]: I0122 14:16:05.346655 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5-utilities\") pod \"community-operators-jcl5d\" (UID: \"fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5\") " pod="openshift-marketplace/community-operators-jcl5d" Jan 22 14:16:05 crc kubenswrapper[5099]: E0122 14:16:05.346816 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:05.846798211 +0000 UTC m=+123.554548448 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.346817 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5-catalog-content\") pod \"community-operators-jcl5d\" (UID: \"fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5\") " pod="openshift-marketplace/community-operators-jcl5d" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.347754 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gj5rr"] Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.358304 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gj5rr" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.362728 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gj5rr"] Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.363088 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.381133 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6vb5\" (UniqueName: \"kubernetes.io/projected/fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5-kube-api-access-g6vb5\") pod \"community-operators-jcl5d\" (UID: \"fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5\") " pod="openshift-marketplace/community-operators-jcl5d" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.383310 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"82aebf1f-ffc4-46c5-a8b3-2873c83b39bc","Type":"ContainerStarted","Data":"f5250739edfb60190ee032be18e40f1da748ad3328777c834b574cd0b366796a"} Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.383376 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"82aebf1f-ffc4-46c5-a8b3-2873c83b39bc","Type":"ContainerStarted","Data":"46cddf0c1ae666451b3fd0c90044980e22a1a6c797008aa99ce0a8dec3ce3f7e"} Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.392439 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" event={"ID":"82eb9134-c59e-4bae-a74e-02ec60240232","Type":"ContainerStarted","Data":"31942815f20b24891650f01051d5932f27f6082eac0bceb12e408005fea04c92"} Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.392745 5099 ???:1] "http: TLS handshake error from 192.168.126.11:51190: no serving certificate available for the kubelet" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.402723 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=2.402680019 podStartE2EDuration="2.402680019s" podCreationTimestamp="2026-01-22 14:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:05.400396486 +0000 UTC m=+123.108146723" watchObservedRunningTime="2026-01-22 14:16:05.402680019 +0000 UTC m=+123.110430256" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.404380 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" event={"ID":"c42ec0c1-434b-41d2-b134-decdccef1700","Type":"ContainerStarted","Data":"1cdaf6a3cde8911951d59aa13abca6b132e726935f9c50bbbb5294c2e42f9ac3"} Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.406109 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" podUID="4944256e-76fd-4652-80c6-a5f9217aadc3" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://2fac788e02f10ec85576f9fffa7ba022941fc907ea45deb0ffaacd11fcdcbfc4" gracePeriod=30 Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.430517 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" podStartSLOduration=101.430489744 
podStartE2EDuration="1m41.430489744s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:05.428965713 +0000 UTC m=+123.136715950" watchObservedRunningTime="2026-01-22 14:16:05.430489744 +0000 UTC m=+123.138239971" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.448070 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.448253 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3266538-9050-43ad-a3d6-7428f83aa788-catalog-content\") pod \"certified-operators-gj5rr\" (UID: \"b3266538-9050-43ad-a3d6-7428f83aa788\") " pod="openshift-marketplace/certified-operators-gj5rr" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.448826 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9tcs\" (UniqueName: \"kubernetes.io/projected/b3266538-9050-43ad-a3d6-7428f83aa788-kube-api-access-p9tcs\") pod \"certified-operators-gj5rr\" (UID: \"b3266538-9050-43ad-a3d6-7428f83aa788\") " pod="openshift-marketplace/certified-operators-gj5rr" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.448858 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3266538-9050-43ad-a3d6-7428f83aa788-utilities\") pod \"certified-operators-gj5rr\" (UID: \"b3266538-9050-43ad-a3d6-7428f83aa788\") " pod="openshift-marketplace/certified-operators-gj5rr" Jan 22 14:16:05 crc kubenswrapper[5099]: E0122 14:16:05.448984 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:05.948960715 +0000 UTC m=+123.656710952 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.474563 5099 ???:1] "http: TLS handshake error from 192.168.126.11:51198: no serving certificate available for the kubelet" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.477010 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jcl5d" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.502363 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.518524 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.518768 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.521615 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.521831 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.546870 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-87mz9"] Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.550385 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p9tcs\" (UniqueName: \"kubernetes.io/projected/b3266538-9050-43ad-a3d6-7428f83aa788-kube-api-access-p9tcs\") pod \"certified-operators-gj5rr\" (UID: \"b3266538-9050-43ad-a3d6-7428f83aa788\") " pod="openshift-marketplace/certified-operators-gj5rr" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.550423 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3266538-9050-43ad-a3d6-7428f83aa788-utilities\") pod \"certified-operators-gj5rr\" (UID: \"b3266538-9050-43ad-a3d6-7428f83aa788\") " pod="openshift-marketplace/certified-operators-gj5rr" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.550446 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3266538-9050-43ad-a3d6-7428f83aa788-catalog-content\") pod \"certified-operators-gj5rr\" (UID: \"b3266538-9050-43ad-a3d6-7428f83aa788\") " pod="openshift-marketplace/certified-operators-gj5rr" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.550523 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55b1116d-86c3-4f74-bc6b-a52f91ef5ba2-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"55b1116d-86c3-4f74-bc6b-a52f91ef5ba2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.550542 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55b1116d-86c3-4f74-bc6b-a52f91ef5ba2-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"55b1116d-86c3-4f74-bc6b-a52f91ef5ba2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.550576 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: 
\"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:05 crc kubenswrapper[5099]: E0122 14:16:05.551108 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:06.050929395 +0000 UTC m=+123.758679632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.551275 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3266538-9050-43ad-a3d6-7428f83aa788-catalog-content\") pod \"certified-operators-gj5rr\" (UID: \"b3266538-9050-43ad-a3d6-7428f83aa788\") " pod="openshift-marketplace/certified-operators-gj5rr" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.551488 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3266538-9050-43ad-a3d6-7428f83aa788-utilities\") pod \"certified-operators-gj5rr\" (UID: \"b3266538-9050-43ad-a3d6-7428f83aa788\") " pod="openshift-marketplace/certified-operators-gj5rr" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.559552 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-87mz9" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.564040 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-87mz9"] Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.573136 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9tcs\" (UniqueName: \"kubernetes.io/projected/b3266538-9050-43ad-a3d6-7428f83aa788-kube-api-access-p9tcs\") pod \"certified-operators-gj5rr\" (UID: \"b3266538-9050-43ad-a3d6-7428f83aa788\") " pod="openshift-marketplace/certified-operators-gj5rr" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.581309 5099 ???:1] "http: TLS handshake error from 192.168.126.11:51206: no serving certificate available for the kubelet" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.652893 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.653675 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9-utilities\") pod \"community-operators-87mz9\" (UID: \"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9\") " pod="openshift-marketplace/community-operators-87mz9" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.653756 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55b1116d-86c3-4f74-bc6b-a52f91ef5ba2-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"55b1116d-86c3-4f74-bc6b-a52f91ef5ba2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.653787 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55b1116d-86c3-4f74-bc6b-a52f91ef5ba2-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"55b1116d-86c3-4f74-bc6b-a52f91ef5ba2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.653828 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9-catalog-content\") pod \"community-operators-87mz9\" (UID: \"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9\") " pod="openshift-marketplace/community-operators-87mz9" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.653865 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjnsj\" (UniqueName: \"kubernetes.io/projected/1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9-kube-api-access-xjnsj\") pod \"community-operators-87mz9\" (UID: \"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9\") " pod="openshift-marketplace/community-operators-87mz9" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.654888 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55b1116d-86c3-4f74-bc6b-a52f91ef5ba2-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: 
\"55b1116d-86c3-4f74-bc6b-a52f91ef5ba2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 14:16:05 crc kubenswrapper[5099]: E0122 14:16:05.658200 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:06.158145686 +0000 UTC m=+123.865895923 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.687640 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55b1116d-86c3-4f74-bc6b-a52f91ef5ba2-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"55b1116d-86c3-4f74-bc6b-a52f91ef5ba2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.696197 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gj5rr" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.753550 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mwp6p"] Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.759398 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.759438 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9-catalog-content\") pod \"community-operators-87mz9\" (UID: \"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9\") " pod="openshift-marketplace/community-operators-87mz9" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.759466 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xjnsj\" (UniqueName: \"kubernetes.io/projected/1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9-kube-api-access-xjnsj\") pod \"community-operators-87mz9\" (UID: \"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9\") " pod="openshift-marketplace/community-operators-87mz9" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.759550 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9-utilities\") pod \"community-operators-87mz9\" (UID: \"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9\") " pod="openshift-marketplace/community-operators-87mz9" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.759998 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9-utilities\") pod \"community-operators-87mz9\" (UID: \"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9\") " pod="openshift-marketplace/community-operators-87mz9" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.760034 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9-catalog-content\") pod \"community-operators-87mz9\" (UID: \"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9\") " pod="openshift-marketplace/community-operators-87mz9" Jan 22 14:16:05 crc kubenswrapper[5099]: E0122 14:16:05.760302 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:06.26028967 +0000 UTC m=+123.968039907 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.763945 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mwp6p" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.767521 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mwp6p"] Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.780384 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jcl5d"] Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.808848 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjnsj\" (UniqueName: \"kubernetes.io/projected/1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9-kube-api-access-xjnsj\") pod \"community-operators-87mz9\" (UID: \"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9\") " pod="openshift-marketplace/community-operators-87mz9" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.816790 5099 patch_prober.go:28] interesting pod/downloads-747b44746d-ldzlj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.816873 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-ldzlj" podUID="513664b2-32c9-4290-9ae7-2400a1c4da84" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.862030 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.862259 5099 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn8ct\" (UniqueName: \"kubernetes.io/projected/e94546d4-9e10-402a-9fc3-2a1c8f755713-kube-api-access-dn8ct\") pod \"certified-operators-mwp6p\" (UID: \"e94546d4-9e10-402a-9fc3-2a1c8f755713\") " pod="openshift-marketplace/certified-operators-mwp6p" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.862437 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e94546d4-9e10-402a-9fc3-2a1c8f755713-catalog-content\") pod \"certified-operators-mwp6p\" (UID: \"e94546d4-9e10-402a-9fc3-2a1c8f755713\") " pod="openshift-marketplace/certified-operators-mwp6p" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.862501 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e94546d4-9e10-402a-9fc3-2a1c8f755713-utilities\") pod \"certified-operators-mwp6p\" (UID: \"e94546d4-9e10-402a-9fc3-2a1c8f755713\") " pod="openshift-marketplace/certified-operators-mwp6p" Jan 22 14:16:05 crc kubenswrapper[5099]: E0122 14:16:05.862635 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:06.362612688 +0000 UTC m=+124.070362925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.903303 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.903413 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-87mz9" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.931184 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-5pr7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:16:05 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 22 14:16:05 crc kubenswrapper[5099]: [+]process-running ok Jan 22 14:16:05 crc kubenswrapper[5099]: healthz check failed Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.931674 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" podUID="b75d2eb4-efad-414a-8c7e-c64e0f83cb2b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.947614 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gj5rr"] Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.965767 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.965882 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e94546d4-9e10-402a-9fc3-2a1c8f755713-catalog-content\") pod \"certified-operators-mwp6p\" (UID: \"e94546d4-9e10-402a-9fc3-2a1c8f755713\") " pod="openshift-marketplace/certified-operators-mwp6p" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.966003 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e94546d4-9e10-402a-9fc3-2a1c8f755713-utilities\") pod \"certified-operators-mwp6p\" (UID: \"e94546d4-9e10-402a-9fc3-2a1c8f755713\") " pod="openshift-marketplace/certified-operators-mwp6p" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.966087 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dn8ct\" (UniqueName: \"kubernetes.io/projected/e94546d4-9e10-402a-9fc3-2a1c8f755713-kube-api-access-dn8ct\") pod \"certified-operators-mwp6p\" (UID: \"e94546d4-9e10-402a-9fc3-2a1c8f755713\") " pod="openshift-marketplace/certified-operators-mwp6p" Jan 22 14:16:05 crc kubenswrapper[5099]: E0122 14:16:05.966864 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:06.46684674 +0000 UTC m=+124.174596977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.967103 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e94546d4-9e10-402a-9fc3-2a1c8f755713-catalog-content\") pod \"certified-operators-mwp6p\" (UID: \"e94546d4-9e10-402a-9fc3-2a1c8f755713\") " pod="openshift-marketplace/certified-operators-mwp6p" Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.967235 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e94546d4-9e10-402a-9fc3-2a1c8f755713-utilities\") pod \"certified-operators-mwp6p\" (UID: \"e94546d4-9e10-402a-9fc3-2a1c8f755713\") " pod="openshift-marketplace/certified-operators-mwp6p" Jan 22 14:16:05 crc kubenswrapper[5099]: W0122 14:16:05.968695 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3266538_9050_43ad_a3d6_7428f83aa788.slice/crio-bf97dcb27cac54113547a234b424212bb34d0766b85b765c26bfec6e9ca9d1ad WatchSource:0}: Error finding container bf97dcb27cac54113547a234b424212bb34d0766b85b765c26bfec6e9ca9d1ad: Status 404 returned error can't find the container with id bf97dcb27cac54113547a234b424212bb34d0766b85b765c26bfec6e9ca9d1ad Jan 22 14:16:05 crc kubenswrapper[5099]: I0122 14:16:05.994980 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn8ct\" (UniqueName: \"kubernetes.io/projected/e94546d4-9e10-402a-9fc3-2a1c8f755713-kube-api-access-dn8ct\") pod \"certified-operators-mwp6p\" (UID: \"e94546d4-9e10-402a-9fc3-2a1c8f755713\") " pod="openshift-marketplace/certified-operators-mwp6p" Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.070748 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:06 crc kubenswrapper[5099]: E0122 14:16:06.071233 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:06.571214983 +0000 UTC m=+124.278965220 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.087464 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mwp6p" Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.172256 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:06 crc kubenswrapper[5099]: E0122 14:16:06.172606 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:06.672590087 +0000 UTC m=+124.380340324 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.240226 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-87mz9"] Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.268334 5099 ???:1] "http: TLS handshake error from 192.168.126.11:51214: no serving certificate available for the kubelet" Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.273938 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:06 crc kubenswrapper[5099]: E0122 14:16:06.274423 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:06.774405561 +0000 UTC m=+124.482155798 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:06 crc kubenswrapper[5099]: W0122 14:16:06.310076 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a3e9623_ffb0_4f51_bba2_9e56a1b3ddb9.slice/crio-8684956b0f6dadb4ddbe93d707b7ba9ccce9fbc6d182ecd1d05787d9db148cbb WatchSource:0}: Error finding container 8684956b0f6dadb4ddbe93d707b7ba9ccce9fbc6d182ecd1d05787d9db148cbb: Status 404 returned error can't find the container with id 8684956b0f6dadb4ddbe93d707b7ba9ccce9fbc6d182ecd1d05787d9db148cbb Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.375824 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:06 crc kubenswrapper[5099]: E0122 14:16:06.376210 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:06.876194936 +0000 UTC m=+124.583945173 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.435389 5099 generic.go:358] "Generic (PLEG): container finished" podID="b3266538-9050-43ad-a3d6-7428f83aa788" containerID="67d19507ea1b84109f4145080b188fa110af00707d9f4e67b95eecd65ca28133" exitCode=0 Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.435531 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gj5rr" event={"ID":"b3266538-9050-43ad-a3d6-7428f83aa788","Type":"ContainerDied","Data":"67d19507ea1b84109f4145080b188fa110af00707d9f4e67b95eecd65ca28133"} Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.435794 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gj5rr" event={"ID":"b3266538-9050-43ad-a3d6-7428f83aa788","Type":"ContainerStarted","Data":"bf97dcb27cac54113547a234b424212bb34d0766b85b765c26bfec6e9ca9d1ad"} Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.450998 5099 generic.go:358] "Generic (PLEG): container finished" podID="fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5" containerID="95cbbc688bc860d2fc52566b28db209114d0f7a0bb2c36bf7703acda17639d55" exitCode=0 Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.451385 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jcl5d" event={"ID":"fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5","Type":"ContainerDied","Data":"95cbbc688bc860d2fc52566b28db209114d0f7a0bb2c36bf7703acda17639d55"} Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.451436 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jcl5d" event={"ID":"fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5","Type":"ContainerStarted","Data":"7ca3725d80c5fd713bcd44189c268fa62053fd1c1d6cef23eeea9f6742faba08"} Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.452231 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-87mz9" event={"ID":"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9","Type":"ContainerStarted","Data":"8684956b0f6dadb4ddbe93d707b7ba9ccce9fbc6d182ecd1d05787d9db148cbb"} Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.468597 5099 generic.go:358] "Generic (PLEG): container finished" podID="82aebf1f-ffc4-46c5-a8b3-2873c83b39bc" containerID="f5250739edfb60190ee032be18e40f1da748ad3328777c834b574cd0b366796a" exitCode=0 Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.469539 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"82aebf1f-ffc4-46c5-a8b3-2873c83b39bc","Type":"ContainerDied","Data":"f5250739edfb60190ee032be18e40f1da748ad3328777c834b574cd0b366796a"} Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.479816 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " 
Jan 22 14:16:06 crc kubenswrapper[5099]: E0122 14:16:06.480133 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:06.980099537 +0000 UTC m=+124.687849774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.480513 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:06 crc kubenswrapper[5099]: E0122 14:16:06.480967 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:06.980956281 +0000 UTC m=+124.688706518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.496313 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.519639 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mwp6p"] Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.581804 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:06 crc kubenswrapper[5099]: E0122 14:16:06.583468 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:07.083449614 +0000 UTC m=+124.791199851 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.684217 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:06 crc kubenswrapper[5099]: E0122 14:16:06.684772 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:07.184757166 +0000 UTC m=+124.892507403 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.785886 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:06 crc kubenswrapper[5099]: E0122 14:16:06.786213 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:07.28619578 +0000 UTC m=+124.993946017 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.887641 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:06 crc kubenswrapper[5099]: E0122 14:16:06.888052 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:07.388032666 +0000 UTC m=+125.095782903 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.934860 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-5pr7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:16:06 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 22 14:16:06 crc kubenswrapper[5099]: [+]process-running ok Jan 22 14:16:06 crc kubenswrapper[5099]: healthz check failed Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.935266 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" podUID="b75d2eb4-efad-414a-8c7e-c64e0f83cb2b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.988893 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:06 crc kubenswrapper[5099]: E0122 14:16:06.989073 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:07.489042149 +0000 UTC m=+125.196792376 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:06 crc kubenswrapper[5099]: I0122 14:16:06.989460 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:06 crc kubenswrapper[5099]: E0122 14:16:06.989796 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:07.489783618 +0000 UTC m=+125.197533855 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.091205 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:07 crc kubenswrapper[5099]: E0122 14:16:07.092528 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:07.592487518 +0000 UTC m=+125.300237775 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.150062 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k7vcn"] Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.156969 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7vcn" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.159054 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.175480 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7vcn"] Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.193523 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:07 crc kubenswrapper[5099]: E0122 14:16:07.194207 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:07.694141758 +0000 UTC m=+125.401891995 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.201024 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.201562 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.211431 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.295724 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:07 crc kubenswrapper[5099]: E0122 14:16:07.295877 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:07.79584352 +0000 UTC m=+125.503593767 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.296310 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ff52d5-f127-494f-b2bd-e9a98e556392-catalog-content\") pod \"redhat-marketplace-k7vcn\" (UID: \"83ff52d5-f127-494f-b2bd-e9a98e556392\") " pod="openshift-marketplace/redhat-marketplace-k7vcn" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.296476 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ff52d5-f127-494f-b2bd-e9a98e556392-utilities\") pod \"redhat-marketplace-k7vcn\" (UID: \"83ff52d5-f127-494f-b2bd-e9a98e556392\") " pod="openshift-marketplace/redhat-marketplace-k7vcn" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.296608 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knkkd\" (UniqueName: \"kubernetes.io/projected/83ff52d5-f127-494f-b2bd-e9a98e556392-kube-api-access-knkkd\") pod \"redhat-marketplace-k7vcn\" (UID: \"83ff52d5-f127-494f-b2bd-e9a98e556392\") " pod="openshift-marketplace/redhat-marketplace-k7vcn" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.296677 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:07 crc kubenswrapper[5099]: E0122 14:16:07.296998 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:07.796989212 +0000 UTC m=+125.504739459 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.398244 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:07 crc kubenswrapper[5099]: E0122 14:16:07.398478 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:07.898428016 +0000 UTC m=+125.606178253 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.398609 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ff52d5-f127-494f-b2bd-e9a98e556392-catalog-content\") pod \"redhat-marketplace-k7vcn\" (UID: \"83ff52d5-f127-494f-b2bd-e9a98e556392\") " pod="openshift-marketplace/redhat-marketplace-k7vcn" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.398733 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ff52d5-f127-494f-b2bd-e9a98e556392-utilities\") pod \"redhat-marketplace-k7vcn\" (UID: \"83ff52d5-f127-494f-b2bd-e9a98e556392\") " pod="openshift-marketplace/redhat-marketplace-k7vcn" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.398847 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-knkkd\" (UniqueName: \"kubernetes.io/projected/83ff52d5-f127-494f-b2bd-e9a98e556392-kube-api-access-knkkd\") pod \"redhat-marketplace-k7vcn\" (UID: \"83ff52d5-f127-494f-b2bd-e9a98e556392\") " pod="openshift-marketplace/redhat-marketplace-k7vcn" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.399066 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ff52d5-f127-494f-b2bd-e9a98e556392-catalog-content\") pod \"redhat-marketplace-k7vcn\" (UID: \"83ff52d5-f127-494f-b2bd-e9a98e556392\") " pod="openshift-marketplace/redhat-marketplace-k7vcn" Jan 22 14:16:07 crc kubenswrapper[5099]: E0122 14:16:07.399310 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 14:16:07.89929406 +0000 UTC m=+125.607044297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.399438 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ff52d5-f127-494f-b2bd-e9a98e556392-utilities\") pod \"redhat-marketplace-k7vcn\" (UID: \"83ff52d5-f127-494f-b2bd-e9a98e556392\") " pod="openshift-marketplace/redhat-marketplace-k7vcn" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.399450 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.445557 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-knkkd\" (UniqueName: \"kubernetes.io/projected/83ff52d5-f127-494f-b2bd-e9a98e556392-kube-api-access-knkkd\") pod \"redhat-marketplace-k7vcn\" (UID: \"83ff52d5-f127-494f-b2bd-e9a98e556392\") " pod="openshift-marketplace/redhat-marketplace-k7vcn" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.481183 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7vcn" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.484645 5099 generic.go:358] "Generic (PLEG): container finished" podID="1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9" containerID="bde296aef27edbe12383a5817cfbe46783277527091cc76426a71226f9c9cf22" exitCode=0 Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.484715 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-87mz9" event={"ID":"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9","Type":"ContainerDied","Data":"bde296aef27edbe12383a5817cfbe46783277527091cc76426a71226f9c9cf22"} Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.491714 5099 generic.go:358] "Generic (PLEG): container finished" podID="e94546d4-9e10-402a-9fc3-2a1c8f755713" containerID="c67d05b65211f33407c27af6faa05bf265e897e0c27969a55f35cb2e73c70ffd" exitCode=0 Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.491978 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwp6p" event={"ID":"e94546d4-9e10-402a-9fc3-2a1c8f755713","Type":"ContainerDied","Data":"c67d05b65211f33407c27af6faa05bf265e897e0c27969a55f35cb2e73c70ffd"} Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.492103 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwp6p" event={"ID":"e94546d4-9e10-402a-9fc3-2a1c8f755713","Type":"ContainerStarted","Data":"c08103ba40b3fd85dcdc2a6a0e7fc423803621fcf77e35aa16af201b3cb99c57"} Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.501072 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:07 crc kubenswrapper[5099]: E0122 14:16:07.501440 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:08.001420743 +0000 UTC m=+125.709170980 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.543454 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"55b1116d-86c3-4f74-bc6b-a52f91ef5ba2","Type":"ContainerStarted","Data":"2b69550d1d37c7156ba36e8a72c755fca1fb12c7c85f8cece3e55dddfda53b89"} Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.545232 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"55b1116d-86c3-4f74-bc6b-a52f91ef5ba2","Type":"ContainerStarted","Data":"7974b8752567bca4631f57167468a5ae4afeb5af7c696cafd9b0fbe649741f14"} Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.551948 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w7w9q"] Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.572127 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=2.5721028219999997 podStartE2EDuration="2.572102822s" podCreationTimestamp="2026-01-22 14:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:07.570020756 +0000 UTC m=+125.277771003" watchObservedRunningTime="2026-01-22 14:16:07.572102822 +0000 UTC m=+125.279853059" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.576311 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7w9q"] Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.576506 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7w9q" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.578729 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-cm8k9" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.604933 5099 ???:1] "http: TLS handshake error from 192.168.126.11:51228: no serving certificate available for the kubelet" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.605065 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:07 crc kubenswrapper[5099]: E0122 14:16:07.605604 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:08.105584842 +0000 UTC m=+125.813335079 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.708204 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:07 crc kubenswrapper[5099]: E0122 14:16:07.708370 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:08.208343683 +0000 UTC m=+125.916093920 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.708615 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.708688 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4jvm\" (UniqueName: \"kubernetes.io/projected/7779fac0-5e1b-4d4c-bd8e-299416704f12-kube-api-access-w4jvm\") pod \"redhat-marketplace-w7w9q\" (UID: \"7779fac0-5e1b-4d4c-bd8e-299416704f12\") " pod="openshift-marketplace/redhat-marketplace-w7w9q" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.708806 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7779fac0-5e1b-4d4c-bd8e-299416704f12-catalog-content\") pod \"redhat-marketplace-w7w9q\" (UID: \"7779fac0-5e1b-4d4c-bd8e-299416704f12\") " pod="openshift-marketplace/redhat-marketplace-w7w9q" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.708992 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7779fac0-5e1b-4d4c-bd8e-299416704f12-utilities\") pod \"redhat-marketplace-w7w9q\" (UID: \"7779fac0-5e1b-4d4c-bd8e-299416704f12\") " pod="openshift-marketplace/redhat-marketplace-w7w9q" Jan 22 14:16:07 crc kubenswrapper[5099]: E0122 14:16:07.709122 5099 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:08.209105753 +0000 UTC m=+125.916855990 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.815862 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.816533 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7779fac0-5e1b-4d4c-bd8e-299416704f12-utilities\") pod \"redhat-marketplace-w7w9q\" (UID: \"7779fac0-5e1b-4d4c-bd8e-299416704f12\") " pod="openshift-marketplace/redhat-marketplace-w7w9q" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.816986 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7779fac0-5e1b-4d4c-bd8e-299416704f12-utilities\") pod \"redhat-marketplace-w7w9q\" (UID: \"7779fac0-5e1b-4d4c-bd8e-299416704f12\") " pod="openshift-marketplace/redhat-marketplace-w7w9q" Jan 22 14:16:07 crc kubenswrapper[5099]: E0122 14:16:07.817054 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:08.317036564 +0000 UTC m=+126.024786801 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.817088 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.817144 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4jvm\" (UniqueName: \"kubernetes.io/projected/7779fac0-5e1b-4d4c-bd8e-299416704f12-kube-api-access-w4jvm\") pod \"redhat-marketplace-w7w9q\" (UID: \"7779fac0-5e1b-4d4c-bd8e-299416704f12\") " pod="openshift-marketplace/redhat-marketplace-w7w9q" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.817260 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7779fac0-5e1b-4d4c-bd8e-299416704f12-catalog-content\") pod \"redhat-marketplace-w7w9q\" (UID: \"7779fac0-5e1b-4d4c-bd8e-299416704f12\") " pod="openshift-marketplace/redhat-marketplace-w7w9q" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.817562 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7779fac0-5e1b-4d4c-bd8e-299416704f12-catalog-content\") pod \"redhat-marketplace-w7w9q\" (UID: \"7779fac0-5e1b-4d4c-bd8e-299416704f12\") " pod="openshift-marketplace/redhat-marketplace-w7w9q" Jan 22 14:16:07 crc kubenswrapper[5099]: E0122 14:16:07.817794 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:16:08.317787615 +0000 UTC m=+126.025537852 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-psxhg" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.860665 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4jvm\" (UniqueName: \"kubernetes.io/projected/7779fac0-5e1b-4d4c-bd8e-299416704f12-kube-api-access-w4jvm\") pod \"redhat-marketplace-w7w9q\" (UID: \"7779fac0-5e1b-4d4c-bd8e-299416704f12\") " pod="openshift-marketplace/redhat-marketplace-w7w9q" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.916731 5099 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.925739 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7w9q" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.926508 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:07 crc kubenswrapper[5099]: E0122 14:16:07.927036 5099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:08.427014831 +0000 UTC m=+126.134765068 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.931953 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-5pr7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:16:07 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 22 14:16:07 crc kubenswrapper[5099]: [+]process-running ok Jan 22 14:16:07 crc kubenswrapper[5099]: healthz check failed Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.932032 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" podUID="b75d2eb4-efad-414a-8c7e-c64e0f83cb2b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.941246 5099 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-22T14:16:07.916760962Z","UUID":"c03f90a1-1da2-4529-94b0-33e5cf35bab1","Handler":null,"Name":"","Endpoint":""} Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.953340 5099 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 22 14:16:07 crc kubenswrapper[5099]: I0122 14:16:07.953384 5099 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.028949 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.034080 5099 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.034130 5099 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.036064 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7vcn"] Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.048968 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.050089 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.060792 5099 patch_prober.go:28] interesting pod/console-64d44f6ddf-gflg8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.061031 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-gflg8" podUID="e07e6e00-cfcc-4513-b231-8d27833d8687" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.075911 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.097959 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-psxhg\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.132373 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.164587 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.225760 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.233830 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.235700 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82aebf1f-ffc4-46c5-a8b3-2873c83b39bc-kube-api-access\") pod \"82aebf1f-ffc4-46c5-a8b3-2873c83b39bc\" (UID: \"82aebf1f-ffc4-46c5-a8b3-2873c83b39bc\") " Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.237039 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82aebf1f-ffc4-46c5-a8b3-2873c83b39bc-kubelet-dir\") pod \"82aebf1f-ffc4-46c5-a8b3-2873c83b39bc\" (UID: \"82aebf1f-ffc4-46c5-a8b3-2873c83b39bc\") " Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.237453 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82aebf1f-ffc4-46c5-a8b3-2873c83b39bc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "82aebf1f-ffc4-46c5-a8b3-2873c83b39bc" (UID: "82aebf1f-ffc4-46c5-a8b3-2873c83b39bc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.238394 5099 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82aebf1f-ffc4-46c5-a8b3-2873c83b39bc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.244825 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82aebf1f-ffc4-46c5-a8b3-2873c83b39bc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "82aebf1f-ffc4-46c5-a8b3-2873c83b39bc" (UID: "82aebf1f-ffc4-46c5-a8b3-2873c83b39bc"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.340123 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82aebf1f-ffc4-46c5-a8b3-2873c83b39bc-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.482766 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7w9q"] Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.492789 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-psxhg"] Jan 22 14:16:08 crc kubenswrapper[5099]: W0122 14:16:08.495940 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7779fac0_5e1b_4d4c_bd8e_299416704f12.slice/crio-8132dbab6c682f8f528b5ee34e29a0865d991f97f26e1f46f0680714c5efd2e1 WatchSource:0}: Error finding container 8132dbab6c682f8f528b5ee34e29a0865d991f97f26e1f46f0680714c5efd2e1: Status 404 returned error can't find the container with id 8132dbab6c682f8f528b5ee34e29a0865d991f97f26e1f46f0680714c5efd2e1 Jan 22 14:16:08 crc kubenswrapper[5099]: W0122 14:16:08.502207 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00219034_b44a_4db2_ad80_b04ff5eacac5.slice/crio-be36cb98065791dd270cf007b6efe356edbf0749dd26dcbd724c5a006702b190 WatchSource:0}: Error finding container be36cb98065791dd270cf007b6efe356edbf0749dd26dcbd724c5a006702b190: Status 404 returned error can't find the container with id be36cb98065791dd270cf007b6efe356edbf0749dd26dcbd724c5a006702b190 Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.546203 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-79vth"] Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.546742 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="82aebf1f-ffc4-46c5-a8b3-2873c83b39bc" containerName="pruner" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.546754 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="82aebf1f-ffc4-46c5-a8b3-2873c83b39bc" containerName="pruner" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.546840 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="82aebf1f-ffc4-46c5-a8b3-2873c83b39bc" containerName="pruner" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.554609 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-79vth" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.557824 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.560962 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-79vth"] Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.564766 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.564881 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"82aebf1f-ffc4-46c5-a8b3-2873c83b39bc","Type":"ContainerDied","Data":"46cddf0c1ae666451b3fd0c90044980e22a1a6c797008aa99ce0a8dec3ce3f7e"} Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.564922 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46cddf0c1ae666451b3fd0c90044980e22a1a6c797008aa99ce0a8dec3ce3f7e" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.574089 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" event={"ID":"82eb9134-c59e-4bae-a74e-02ec60240232","Type":"ContainerStarted","Data":"e97e1e07d7c0bc0a38f6b3e7d95679b4e5efb847dbf5f8bba0b5a4976058c9d8"} Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.574134 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" event={"ID":"82eb9134-c59e-4bae-a74e-02ec60240232","Type":"ContainerStarted","Data":"f3ed8e2d48b893140565b7a163e7ab24f77c526ed6f5bd73218961335b69ab78"} Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.574146 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" event={"ID":"82eb9134-c59e-4bae-a74e-02ec60240232","Type":"ContainerStarted","Data":"491b1271a494e89b4250f8cd011ab6192767bfc4296888992ea3ac7054c8b8eb"} Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.578177 5099 generic.go:358] "Generic (PLEG): container finished" podID="55b1116d-86c3-4f74-bc6b-a52f91ef5ba2" containerID="2b69550d1d37c7156ba36e8a72c755fca1fb12c7c85f8cece3e55dddfda53b89" exitCode=0 Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.578447 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"55b1116d-86c3-4f74-bc6b-a52f91ef5ba2","Type":"ContainerDied","Data":"2b69550d1d37c7156ba36e8a72c755fca1fb12c7c85f8cece3e55dddfda53b89"} Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.582753 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-psxhg" event={"ID":"00219034-b44a-4db2-ad80-b04ff5eacac5","Type":"ContainerStarted","Data":"be36cb98065791dd270cf007b6efe356edbf0749dd26dcbd724c5a006702b190"} Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.584194 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7w9q" event={"ID":"7779fac0-5e1b-4d4c-bd8e-299416704f12","Type":"ContainerStarted","Data":"8132dbab6c682f8f528b5ee34e29a0865d991f97f26e1f46f0680714c5efd2e1"} Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.629738 5099 generic.go:358] "Generic (PLEG): container finished" podID="83ff52d5-f127-494f-b2bd-e9a98e556392" containerID="9e47157eef931cc988aef84c69b28d3a4b69adda50a02b81dfdcd7f8fd726ee4" exitCode=0 Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.631067 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7vcn" event={"ID":"83ff52d5-f127-494f-b2bd-e9a98e556392","Type":"ContainerDied","Data":"9e47157eef931cc988aef84c69b28d3a4b69adda50a02b81dfdcd7f8fd726ee4"} Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.631101 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-k7vcn" event={"ID":"83ff52d5-f127-494f-b2bd-e9a98e556392","Type":"ContainerStarted","Data":"18316826833ea9a60b2567baea3eaf70298a599a266fa7865f29018033241963"} Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.632963 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-wrhpn" podStartSLOduration=13.632942431 podStartE2EDuration="13.632942431s" podCreationTimestamp="2026-01-22 14:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:08.631073961 +0000 UTC m=+126.338824198" watchObservedRunningTime="2026-01-22 14:16:08.632942431 +0000 UTC m=+126.340692668" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.657229 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe-utilities\") pod \"redhat-operators-79vth\" (UID: \"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe\") " pod="openshift-marketplace/redhat-operators-79vth" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.658262 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe-catalog-content\") pod \"redhat-operators-79vth\" (UID: \"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe\") " pod="openshift-marketplace/redhat-operators-79vth" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.658384 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9wgl\" (UniqueName: \"kubernetes.io/projected/2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe-kube-api-access-s9wgl\") pod \"redhat-operators-79vth\" (UID: \"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe\") " pod="openshift-marketplace/redhat-operators-79vth" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.760249 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe-utilities\") pod \"redhat-operators-79vth\" (UID: \"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe\") " pod="openshift-marketplace/redhat-operators-79vth" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.760954 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe-catalog-content\") pod \"redhat-operators-79vth\" (UID: \"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe\") " pod="openshift-marketplace/redhat-operators-79vth" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.761026 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s9wgl\" (UniqueName: \"kubernetes.io/projected/2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe-kube-api-access-s9wgl\") pod \"redhat-operators-79vth\" (UID: \"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe\") " pod="openshift-marketplace/redhat-operators-79vth" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.761660 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe-catalog-content\") pod \"redhat-operators-79vth\" (UID: \"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe\") " 
pod="openshift-marketplace/redhat-operators-79vth" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.761935 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe-utilities\") pod \"redhat-operators-79vth\" (UID: \"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe\") " pod="openshift-marketplace/redhat-operators-79vth" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.773758 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.786047 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9wgl\" (UniqueName: \"kubernetes.io/projected/2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe-kube-api-access-s9wgl\") pod \"redhat-operators-79vth\" (UID: \"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe\") " pod="openshift-marketplace/redhat-operators-79vth" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.914140 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-79vth" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.931596 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-5pr7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:16:08 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 22 14:16:08 crc kubenswrapper[5099]: [+]process-running ok Jan 22 14:16:08 crc kubenswrapper[5099]: healthz check failed Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.931933 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" podUID="b75d2eb4-efad-414a-8c7e-c64e0f83cb2b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.949510 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hwzrn"] Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.962411 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hwzrn" Jan 22 14:16:08 crc kubenswrapper[5099]: I0122 14:16:08.971767 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hwzrn"] Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.065771 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc17a8e9-de13-44d6-aa07-d13560bcd275-utilities\") pod \"redhat-operators-hwzrn\" (UID: \"fc17a8e9-de13-44d6-aa07-d13560bcd275\") " pod="openshift-marketplace/redhat-operators-hwzrn" Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.065826 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfggw\" (UniqueName: \"kubernetes.io/projected/fc17a8e9-de13-44d6-aa07-d13560bcd275-kube-api-access-zfggw\") pod \"redhat-operators-hwzrn\" (UID: \"fc17a8e9-de13-44d6-aa07-d13560bcd275\") " pod="openshift-marketplace/redhat-operators-hwzrn" Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.066008 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc17a8e9-de13-44d6-aa07-d13560bcd275-catalog-content\") pod \"redhat-operators-hwzrn\" (UID: \"fc17a8e9-de13-44d6-aa07-d13560bcd275\") " pod="openshift-marketplace/redhat-operators-hwzrn" Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.168963 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc17a8e9-de13-44d6-aa07-d13560bcd275-utilities\") pod \"redhat-operators-hwzrn\" (UID: \"fc17a8e9-de13-44d6-aa07-d13560bcd275\") " pod="openshift-marketplace/redhat-operators-hwzrn" Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.169013 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zfggw\" (UniqueName: \"kubernetes.io/projected/fc17a8e9-de13-44d6-aa07-d13560bcd275-kube-api-access-zfggw\") pod \"redhat-operators-hwzrn\" (UID: \"fc17a8e9-de13-44d6-aa07-d13560bcd275\") " pod="openshift-marketplace/redhat-operators-hwzrn" Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.169067 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc17a8e9-de13-44d6-aa07-d13560bcd275-catalog-content\") pod \"redhat-operators-hwzrn\" (UID: \"fc17a8e9-de13-44d6-aa07-d13560bcd275\") " pod="openshift-marketplace/redhat-operators-hwzrn" Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.169697 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc17a8e9-de13-44d6-aa07-d13560bcd275-catalog-content\") pod \"redhat-operators-hwzrn\" (UID: \"fc17a8e9-de13-44d6-aa07-d13560bcd275\") " pod="openshift-marketplace/redhat-operators-hwzrn" Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.170052 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc17a8e9-de13-44d6-aa07-d13560bcd275-utilities\") pod \"redhat-operators-hwzrn\" (UID: \"fc17a8e9-de13-44d6-aa07-d13560bcd275\") " pod="openshift-marketplace/redhat-operators-hwzrn" Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.188660 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zfggw\" (UniqueName: \"kubernetes.io/projected/fc17a8e9-de13-44d6-aa07-d13560bcd275-kube-api-access-zfggw\") pod \"redhat-operators-hwzrn\" (UID: \"fc17a8e9-de13-44d6-aa07-d13560bcd275\") " pod="openshift-marketplace/redhat-operators-hwzrn" Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.272697 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-79vth"] Jan 22 14:16:09 crc kubenswrapper[5099]: W0122 14:16:09.295639 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d375ce4_cfbc_4019_a2a4_3f31c8bd10fe.slice/crio-7d39ba3c6874429dbaa60d19eb7873e20874319a39084d2ea71680c990460c28 WatchSource:0}: Error finding container 7d39ba3c6874429dbaa60d19eb7873e20874319a39084d2ea71680c990460c28: Status 404 returned error can't find the container with id 7d39ba3c6874429dbaa60d19eb7873e20874319a39084d2ea71680c990460c28 Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.333399 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hwzrn" Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.554774 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hwzrn"] Jan 22 14:16:09 crc kubenswrapper[5099]: W0122 14:16:09.571933 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc17a8e9_de13_44d6_aa07_d13560bcd275.slice/crio-4faa76a2bd77313d7f4a25f368190c1f0515854c63ab53f68dc139fbfa2af7fc WatchSource:0}: Error finding container 4faa76a2bd77313d7f4a25f368190c1f0515854c63ab53f68dc139fbfa2af7fc: Status 404 returned error can't find the container with id 4faa76a2bd77313d7f4a25f368190c1f0515854c63ab53f68dc139fbfa2af7fc Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.635867 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwzrn" event={"ID":"fc17a8e9-de13-44d6-aa07-d13560bcd275","Type":"ContainerStarted","Data":"4faa76a2bd77313d7f4a25f368190c1f0515854c63ab53f68dc139fbfa2af7fc"} Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.637829 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79vth" event={"ID":"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe","Type":"ContainerStarted","Data":"7d39ba3c6874429dbaa60d19eb7873e20874319a39084d2ea71680c990460c28"} Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.642057 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-psxhg" event={"ID":"00219034-b44a-4db2-ad80-b04ff5eacac5","Type":"ContainerStarted","Data":"a07a00489ee7df323293e4f87c95150d5e885e8bc4f53c635f76d39b8f37e7ba"} Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.645026 5099 generic.go:358] "Generic (PLEG): container finished" podID="7779fac0-5e1b-4d4c-bd8e-299416704f12" containerID="b49c8de0311c5b7869017842e228768c6124d50a7446f083cd00d79212c026f3" exitCode=0 Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.645551 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7w9q" event={"ID":"7779fac0-5e1b-4d4c-bd8e-299416704f12","Type":"ContainerDied","Data":"b49c8de0311c5b7869017842e228768c6124d50a7446f083cd00d79212c026f3"} Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.853082 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.853145 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.860389 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.928240 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.930917 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-5pr7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:16:09 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 22 14:16:09 crc kubenswrapper[5099]: [+]process-running ok Jan 22 14:16:09 crc kubenswrapper[5099]: healthz check failed Jan 22 14:16:09 crc kubenswrapper[5099]: I0122 14:16:09.930989 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" podUID="b75d2eb4-efad-414a-8c7e-c64e0f83cb2b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:16:10 crc kubenswrapper[5099]: I0122 14:16:10.190122 5099 ???:1] "http: TLS handshake error from 192.168.126.11:51234: no serving certificate available for the kubelet" Jan 22 14:16:10 crc kubenswrapper[5099]: I0122 14:16:10.400802 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-ldzlj" Jan 22 14:16:10 crc kubenswrapper[5099]: I0122 14:16:10.586712 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 14:16:10 crc kubenswrapper[5099]: I0122 14:16:10.654289 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"55b1116d-86c3-4f74-bc6b-a52f91ef5ba2","Type":"ContainerDied","Data":"7974b8752567bca4631f57167468a5ae4afeb5af7c696cafd9b0fbe649741f14"} Jan 22 14:16:10 crc kubenswrapper[5099]: I0122 14:16:10.654343 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7974b8752567bca4631f57167468a5ae4afeb5af7c696cafd9b0fbe649741f14" Jan 22 14:16:10 crc kubenswrapper[5099]: I0122 14:16:10.654464 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 14:16:10 crc kubenswrapper[5099]: I0122 14:16:10.695754 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55b1116d-86c3-4f74-bc6b-a52f91ef5ba2-kubelet-dir\") pod \"55b1116d-86c3-4f74-bc6b-a52f91ef5ba2\" (UID: \"55b1116d-86c3-4f74-bc6b-a52f91ef5ba2\") " Jan 22 14:16:10 crc kubenswrapper[5099]: I0122 14:16:10.695861 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55b1116d-86c3-4f74-bc6b-a52f91ef5ba2-kube-api-access\") pod \"55b1116d-86c3-4f74-bc6b-a52f91ef5ba2\" (UID: \"55b1116d-86c3-4f74-bc6b-a52f91ef5ba2\") " Jan 22 14:16:10 crc kubenswrapper[5099]: I0122 14:16:10.696079 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55b1116d-86c3-4f74-bc6b-a52f91ef5ba2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "55b1116d-86c3-4f74-bc6b-a52f91ef5ba2" (UID: "55b1116d-86c3-4f74-bc6b-a52f91ef5ba2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:16:10 crc kubenswrapper[5099]: I0122 14:16:10.696326 5099 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55b1116d-86c3-4f74-bc6b-a52f91ef5ba2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:10 crc kubenswrapper[5099]: I0122 14:16:10.702825 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55b1116d-86c3-4f74-bc6b-a52f91ef5ba2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "55b1116d-86c3-4f74-bc6b-a52f91ef5ba2" (UID: "55b1116d-86c3-4f74-bc6b-a52f91ef5ba2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:10 crc kubenswrapper[5099]: I0122 14:16:10.797112 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55b1116d-86c3-4f74-bc6b-a52f91ef5ba2-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:10 crc kubenswrapper[5099]: I0122 14:16:10.834253 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-b8q59" Jan 22 14:16:10 crc kubenswrapper[5099]: I0122 14:16:10.876901 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:10 crc kubenswrapper[5099]: I0122 14:16:10.900980 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-psxhg" podStartSLOduration=106.900954493 podStartE2EDuration="1m46.900954493s" podCreationTimestamp="2026-01-22 14:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:10.900184531 +0000 UTC m=+128.607934938" watchObservedRunningTime="2026-01-22 14:16:10.900954493 +0000 UTC m=+128.608704730" Jan 22 14:16:10 crc kubenswrapper[5099]: I0122 14:16:10.932404 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-5pr7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:16:10 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 22 14:16:10 crc kubenswrapper[5099]: [+]process-running ok Jan 22 14:16:10 crc kubenswrapper[5099]: healthz check failed Jan 22 14:16:10 crc kubenswrapper[5099]: I0122 14:16:10.932468 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" podUID="b75d2eb4-efad-414a-8c7e-c64e0f83cb2b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.612185 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.612563 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.612593 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.612628 5099 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.613689 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.613963 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.614438 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.628219 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.633099 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.641242 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.641252 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.669970 5099 generic.go:358] "Generic (PLEG): container finished" podID="fc17a8e9-de13-44d6-aa07-d13560bcd275" containerID="c7f288163b683d6100e77e1ecbb23621dfda5dc11302984b12581bbd63d3c6d5" exitCode=0 Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.670085 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwzrn" event={"ID":"fc17a8e9-de13-44d6-aa07-d13560bcd275","Type":"ContainerDied","Data":"c7f288163b683d6100e77e1ecbb23621dfda5dc11302984b12581bbd63d3c6d5"} Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.672105 5099 generic.go:358] "Generic (PLEG): container finished" podID="2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe" containerID="dc160a6faa2b0b898a91efd66fb06a820808e87e37a7deb472be2abf2dbfedd2" exitCode=0 Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.672364 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79vth" 
event={"ID":"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe","Type":"ContainerDied","Data":"dc160a6faa2b0b898a91efd66fb06a820808e87e37a7deb472be2abf2dbfedd2"} Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.679279 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.713829 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-metrics-certs\") pod \"network-metrics-daemon-6qncx\" (UID: \"47a33b1f-9d8a-4a87-9d5b-15c2b36959df\") " pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.716372 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.727841 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47a33b1f-9d8a-4a87-9d5b-15c2b36959df-metrics-certs\") pod \"network-metrics-daemon-6qncx\" (UID: \"47a33b1f-9d8a-4a87-9d5b-15c2b36959df\") " pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.842796 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.864151 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.933358 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-5pr7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:16:11 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 22 14:16:11 crc kubenswrapper[5099]: [+]process-running ok Jan 22 14:16:11 crc kubenswrapper[5099]: healthz check failed Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.933445 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" podUID="b75d2eb4-efad-414a-8c7e-c64e0f83cb2b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:16:11 crc kubenswrapper[5099]: I0122 14:16:11.960311 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:16:12 crc kubenswrapper[5099]: I0122 14:16:12.023949 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 22 14:16:12 crc kubenswrapper[5099]: I0122 14:16:12.030206 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qncx" Jan 22 14:16:12 crc kubenswrapper[5099]: I0122 14:16:12.474513 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qcmvq" Jan 22 14:16:12 crc kubenswrapper[5099]: I0122 14:16:12.930875 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-5pr7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:16:12 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 22 14:16:12 crc kubenswrapper[5099]: [+]process-running ok Jan 22 14:16:12 crc kubenswrapper[5099]: healthz check failed Jan 22 14:16:12 crc kubenswrapper[5099]: I0122 14:16:12.931237 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" podUID="b75d2eb4-efad-414a-8c7e-c64e0f83cb2b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:16:13 crc kubenswrapper[5099]: E0122 14:16:13.556986 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fac788e02f10ec85576f9fffa7ba022941fc907ea45deb0ffaacd11fcdcbfc4" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 14:16:13 crc kubenswrapper[5099]: E0122 14:16:13.559297 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fac788e02f10ec85576f9fffa7ba022941fc907ea45deb0ffaacd11fcdcbfc4" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 14:16:13 crc kubenswrapper[5099]: E0122 14:16:13.561013 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fac788e02f10ec85576f9fffa7ba022941fc907ea45deb0ffaacd11fcdcbfc4" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 14:16:13 crc kubenswrapper[5099]: E0122 14:16:13.561131 5099 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" podUID="4944256e-76fd-4652-80c6-a5f9217aadc3" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 22 14:16:14 crc kubenswrapper[5099]: I0122 14:16:14.248461 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-5pr7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:16:14 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 22 14:16:14 crc kubenswrapper[5099]: [+]process-running ok Jan 22 14:16:14 crc kubenswrapper[5099]: healthz check failed Jan 22 14:16:14 crc kubenswrapper[5099]: I0122 14:16:14.248584 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" podUID="b75d2eb4-efad-414a-8c7e-c64e0f83cb2b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:16:14 crc kubenswrapper[5099]: I0122 
14:16:14.378875 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" Jan 22 14:16:14 crc kubenswrapper[5099]: I0122 14:16:14.930494 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-5pr7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:16:14 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 22 14:16:14 crc kubenswrapper[5099]: [+]process-running ok Jan 22 14:16:14 crc kubenswrapper[5099]: healthz check failed Jan 22 14:16:14 crc kubenswrapper[5099]: I0122 14:16:14.930573 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" podUID="b75d2eb4-efad-414a-8c7e-c64e0f83cb2b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:16:15 crc kubenswrapper[5099]: I0122 14:16:15.345466 5099 ???:1] "http: TLS handshake error from 192.168.126.11:50738: no serving certificate available for the kubelet" Jan 22 14:16:15 crc kubenswrapper[5099]: I0122 14:16:15.931126 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-5pr7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:16:15 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 22 14:16:15 crc kubenswrapper[5099]: [+]process-running ok Jan 22 14:16:15 crc kubenswrapper[5099]: healthz check failed Jan 22 14:16:15 crc kubenswrapper[5099]: I0122 14:16:15.931220 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" podUID="b75d2eb4-efad-414a-8c7e-c64e0f83cb2b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:16:16 crc kubenswrapper[5099]: I0122 14:16:16.935447 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-5pr7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:16:16 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 22 14:16:16 crc kubenswrapper[5099]: [+]process-running ok Jan 22 14:16:16 crc kubenswrapper[5099]: healthz check failed Jan 22 14:16:16 crc kubenswrapper[5099]: I0122 14:16:16.935783 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" podUID="b75d2eb4-efad-414a-8c7e-c64e0f83cb2b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:16:17 crc kubenswrapper[5099]: I0122 14:16:17.930384 5099 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-5pr7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:16:17 crc kubenswrapper[5099]: [-]has-synced failed: reason withheld Jan 22 14:16:17 crc kubenswrapper[5099]: [+]process-running ok Jan 22 14:16:17 crc kubenswrapper[5099]: healthz check failed Jan 22 14:16:17 crc kubenswrapper[5099]: I0122 14:16:17.930516 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" 
podUID="b75d2eb4-efad-414a-8c7e-c64e0f83cb2b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:16:18 crc kubenswrapper[5099]: I0122 14:16:18.049066 5099 patch_prober.go:28] interesting pod/console-64d44f6ddf-gflg8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 22 14:16:18 crc kubenswrapper[5099]: I0122 14:16:18.049138 5099 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-gflg8" podUID="e07e6e00-cfcc-4513-b231-8d27833d8687" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 22 14:16:18 crc kubenswrapper[5099]: W0122 14:16:18.695819 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b87002_b798_480a_8e17_83053d698239.slice/crio-6d8f22102688fb720d0f656f85ac1b96bf48aea5a25eebb611c03dfe4dbb0a60 WatchSource:0}: Error finding container 6d8f22102688fb720d0f656f85ac1b96bf48aea5a25eebb611c03dfe4dbb0a60: Status 404 returned error can't find the container with id 6d8f22102688fb720d0f656f85ac1b96bf48aea5a25eebb611c03dfe4dbb0a60 Jan 22 14:16:18 crc kubenswrapper[5099]: I0122 14:16:18.728238 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"6d8f22102688fb720d0f656f85ac1b96bf48aea5a25eebb611c03dfe4dbb0a60"} Jan 22 14:16:18 crc kubenswrapper[5099]: I0122 14:16:18.915596 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6qncx"] Jan 22 14:16:18 crc kubenswrapper[5099]: I0122 14:16:18.932576 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" Jan 22 14:16:18 crc kubenswrapper[5099]: I0122 14:16:18.936605 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-5pr7k" Jan 22 14:16:18 crc kubenswrapper[5099]: W0122 14:16:18.945470 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47a33b1f_9d8a_4a87_9d5b_15c2b36959df.slice/crio-acd5ab4a0f7b5d8b3653ba555cc10a707ec47eab4122fe1436eb7c7a04e404ee WatchSource:0}: Error finding container acd5ab4a0f7b5d8b3653ba555cc10a707ec47eab4122fe1436eb7c7a04e404ee: Status 404 returned error can't find the container with id acd5ab4a0f7b5d8b3653ba555cc10a707ec47eab4122fe1436eb7c7a04e404ee Jan 22 14:16:19 crc kubenswrapper[5099]: W0122 14:16:19.054916 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9ae5f6_97bd_46ac_bafa_ca1b4452a141.slice/crio-ad02b268310311eba1bf5cc991fbbcaa4a8d181b10119dd5d8d8bd226a24daf5 WatchSource:0}: Error finding container ad02b268310311eba1bf5cc991fbbcaa4a8d181b10119dd5d8d8bd226a24daf5: Status 404 returned error can't find the container with id ad02b268310311eba1bf5cc991fbbcaa4a8d181b10119dd5d8d8bd226a24daf5 Jan 22 14:16:19 crc kubenswrapper[5099]: I0122 14:16:19.337268 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rwglj" Jan 22 14:16:19 crc 
kubenswrapper[5099]: I0122 14:16:19.737608 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"8d974bb3f91f188b3549c5fc3f077d85146c61db501ab22811d68e04e2ad0494"} Jan 22 14:16:19 crc kubenswrapper[5099]: I0122 14:16:19.738143 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"ad02b268310311eba1bf5cc991fbbcaa4a8d181b10119dd5d8d8bd226a24daf5"} Jan 22 14:16:19 crc kubenswrapper[5099]: I0122 14:16:19.741086 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"e066097184f03569554016b4e2d54dfbba5718eec936be4ec550b1c38897c81f"} Jan 22 14:16:19 crc kubenswrapper[5099]: I0122 14:16:19.741142 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"68fea0f9d801d7ae09113175522f204eb3339b7cc59b335bf17ccfd3843abd28"} Jan 22 14:16:19 crc kubenswrapper[5099]: I0122 14:16:19.743560 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-87mz9" event={"ID":"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9","Type":"ContainerStarted","Data":"842bd0cff1fc1c00454f305adcbe0df8c22af0edef5e5b008c8a5bbf192b666e"} Jan 22 14:16:19 crc kubenswrapper[5099]: I0122 14:16:19.746552 5099 generic.go:358] "Generic (PLEG): container finished" podID="e94546d4-9e10-402a-9fc3-2a1c8f755713" containerID="ce7fb68fa977b1322d84a8da323752f6426f9aba91b2335663450f3926bcc562" exitCode=0 Jan 22 14:16:19 crc kubenswrapper[5099]: I0122 14:16:19.746709 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwp6p" event={"ID":"e94546d4-9e10-402a-9fc3-2a1c8f755713","Type":"ContainerDied","Data":"ce7fb68fa977b1322d84a8da323752f6426f9aba91b2335663450f3926bcc562"} Jan 22 14:16:19 crc kubenswrapper[5099]: I0122 14:16:19.770303 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"3ab629e95e02c78e1eedc83402404023282ba7eb554ce6fcf05cca8d79a77f57"} Jan 22 14:16:19 crc kubenswrapper[5099]: I0122 14:16:19.785054 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6qncx" event={"ID":"47a33b1f-9d8a-4a87-9d5b-15c2b36959df","Type":"ContainerStarted","Data":"cf6a0035786ee3ecba7c4a7f81d1b8490a4117a85c8b31e10eb7621e57c01619"} Jan 22 14:16:19 crc kubenswrapper[5099]: I0122 14:16:19.785108 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6qncx" event={"ID":"47a33b1f-9d8a-4a87-9d5b-15c2b36959df","Type":"ContainerStarted","Data":"acd5ab4a0f7b5d8b3653ba555cc10a707ec47eab4122fe1436eb7c7a04e404ee"} Jan 22 14:16:19 crc kubenswrapper[5099]: I0122 14:16:19.795089 5099 generic.go:358] "Generic (PLEG): container finished" podID="7779fac0-5e1b-4d4c-bd8e-299416704f12" containerID="3fe57eb691a53f95dd29c529f6a7fbac280204dd7f8d60858ca24aecc60b24cb" exitCode=0 Jan 22 14:16:19 crc kubenswrapper[5099]: I0122 14:16:19.795212 
5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7w9q" event={"ID":"7779fac0-5e1b-4d4c-bd8e-299416704f12","Type":"ContainerDied","Data":"3fe57eb691a53f95dd29c529f6a7fbac280204dd7f8d60858ca24aecc60b24cb"} Jan 22 14:16:19 crc kubenswrapper[5099]: I0122 14:16:19.802041 5099 generic.go:358] "Generic (PLEG): container finished" podID="b3266538-9050-43ad-a3d6-7428f83aa788" containerID="31c12304712834620597add05cc9d17c1d39d5079b9a1fe7dd3e839fa1f36a6a" exitCode=0 Jan 22 14:16:19 crc kubenswrapper[5099]: I0122 14:16:19.802131 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gj5rr" event={"ID":"b3266538-9050-43ad-a3d6-7428f83aa788","Type":"ContainerDied","Data":"31c12304712834620597add05cc9d17c1d39d5079b9a1fe7dd3e839fa1f36a6a"} Jan 22 14:16:19 crc kubenswrapper[5099]: I0122 14:16:19.812965 5099 generic.go:358] "Generic (PLEG): container finished" podID="83ff52d5-f127-494f-b2bd-e9a98e556392" containerID="e6653d3f379301bd29eb0ccbcfa53e558de537ba1698457cf43f4ecec43f78cc" exitCode=0 Jan 22 14:16:19 crc kubenswrapper[5099]: I0122 14:16:19.813138 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7vcn" event={"ID":"83ff52d5-f127-494f-b2bd-e9a98e556392","Type":"ContainerDied","Data":"e6653d3f379301bd29eb0ccbcfa53e558de537ba1698457cf43f4ecec43f78cc"} Jan 22 14:16:19 crc kubenswrapper[5099]: I0122 14:16:19.816085 5099 generic.go:358] "Generic (PLEG): container finished" podID="fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5" containerID="56890c8575c7e18f97b76157cd0321e71a76208b3fe3c4ac87953525fb9d74bd" exitCode=0 Jan 22 14:16:19 crc kubenswrapper[5099]: I0122 14:16:19.816183 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jcl5d" event={"ID":"fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5","Type":"ContainerDied","Data":"56890c8575c7e18f97b76157cd0321e71a76208b3fe3c4ac87953525fb9d74bd"} Jan 22 14:16:20 crc kubenswrapper[5099]: I0122 14:16:20.847635 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jcl5d" event={"ID":"fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5","Type":"ContainerStarted","Data":"62533b9c54a55bfe9392ffcee50f3c4d61f22d499560e1d25dab231a56766b1f"} Jan 22 14:16:20 crc kubenswrapper[5099]: I0122 14:16:20.854816 5099 generic.go:358] "Generic (PLEG): container finished" podID="1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9" containerID="842bd0cff1fc1c00454f305adcbe0df8c22af0edef5e5b008c8a5bbf192b666e" exitCode=0 Jan 22 14:16:20 crc kubenswrapper[5099]: I0122 14:16:20.854891 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-87mz9" event={"ID":"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9","Type":"ContainerDied","Data":"842bd0cff1fc1c00454f305adcbe0df8c22af0edef5e5b008c8a5bbf192b666e"} Jan 22 14:16:20 crc kubenswrapper[5099]: I0122 14:16:20.865988 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwp6p" event={"ID":"e94546d4-9e10-402a-9fc3-2a1c8f755713","Type":"ContainerStarted","Data":"90623c73fc5da9c095559cdb9f5dc28a1f357a1dfbf9b8766447dcb1b0e7cc4b"} Jan 22 14:16:20 crc kubenswrapper[5099]: I0122 14:16:20.875473 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6qncx" event={"ID":"47a33b1f-9d8a-4a87-9d5b-15c2b36959df","Type":"ContainerStarted","Data":"8be71c8c47aee1a49acee5979323d7fea3f1f0a8db7f74ddcba2fe190ea5db67"} Jan 22 14:16:20 crc 
kubenswrapper[5099]: I0122 14:16:20.881011 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7w9q" event={"ID":"7779fac0-5e1b-4d4c-bd8e-299416704f12","Type":"ContainerStarted","Data":"b4b82faffed4bf4f28f191d48a526e6051823d892f28f74ff6bf8bae67025144"} Jan 22 14:16:20 crc kubenswrapper[5099]: I0122 14:16:20.888736 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gj5rr" event={"ID":"b3266538-9050-43ad-a3d6-7428f83aa788","Type":"ContainerStarted","Data":"e2e79a21b653fa291a83ffbd0afbf6aef0c182483e0edc63548d01f498d0318b"} Jan 22 14:16:20 crc kubenswrapper[5099]: I0122 14:16:20.891716 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7vcn" event={"ID":"83ff52d5-f127-494f-b2bd-e9a98e556392","Type":"ContainerStarted","Data":"3d4d20ace4b753579568eceacae90a039b99e88e71a31847cf092dcc0f046f93"} Jan 22 14:16:20 crc kubenswrapper[5099]: I0122 14:16:20.899797 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jcl5d" podStartSLOduration=3.587227489 podStartE2EDuration="15.899772081s" podCreationTimestamp="2026-01-22 14:16:05 +0000 UTC" firstStartedPulling="2026-01-22 14:16:06.452178059 +0000 UTC m=+124.159928296" lastFinishedPulling="2026-01-22 14:16:18.764722651 +0000 UTC m=+136.472472888" observedRunningTime="2026-01-22 14:16:20.873294174 +0000 UTC m=+138.581044431" watchObservedRunningTime="2026-01-22 14:16:20.899772081 +0000 UTC m=+138.607522318" Jan 22 14:16:21 crc kubenswrapper[5099]: I0122 14:16:21.310013 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:16:21 crc kubenswrapper[5099]: I0122 14:16:21.329846 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w7w9q" podStartSLOduration=5.083621888 podStartE2EDuration="14.329827453s" podCreationTimestamp="2026-01-22 14:16:07 +0000 UTC" firstStartedPulling="2026-01-22 14:16:09.646493036 +0000 UTC m=+127.354243273" lastFinishedPulling="2026-01-22 14:16:18.892698601 +0000 UTC m=+136.600448838" observedRunningTime="2026-01-22 14:16:21.32704915 +0000 UTC m=+139.034799397" watchObservedRunningTime="2026-01-22 14:16:21.329827453 +0000 UTC m=+139.037577690" Jan 22 14:16:21 crc kubenswrapper[5099]: I0122 14:16:21.361092 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gj5rr" podStartSLOduration=4.025073733 podStartE2EDuration="16.361075597s" podCreationTimestamp="2026-01-22 14:16:05 +0000 UTC" firstStartedPulling="2026-01-22 14:16:06.436279247 +0000 UTC m=+124.144029474" lastFinishedPulling="2026-01-22 14:16:18.772281101 +0000 UTC m=+136.480031338" observedRunningTime="2026-01-22 14:16:21.345896857 +0000 UTC m=+139.053647104" watchObservedRunningTime="2026-01-22 14:16:21.361075597 +0000 UTC m=+139.068825824" Jan 22 14:16:21 crc kubenswrapper[5099]: I0122 14:16:21.364581 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-6qncx" podStartSLOduration=118.364569988 podStartE2EDuration="1m58.364569988s" podCreationTimestamp="2026-01-22 14:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:21.360586793 +0000 UTC m=+139.068337040" 
watchObservedRunningTime="2026-01-22 14:16:21.364569988 +0000 UTC m=+139.072320225" Jan 22 14:16:21 crc kubenswrapper[5099]: I0122 14:16:21.386636 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mwp6p" podStartSLOduration=5.11519923 podStartE2EDuration="16.386619889s" podCreationTimestamp="2026-01-22 14:16:05 +0000 UTC" firstStartedPulling="2026-01-22 14:16:07.493246411 +0000 UTC m=+125.200996648" lastFinishedPulling="2026-01-22 14:16:18.76466707 +0000 UTC m=+136.472417307" observedRunningTime="2026-01-22 14:16:21.384485432 +0000 UTC m=+139.092235669" watchObservedRunningTime="2026-01-22 14:16:21.386619889 +0000 UTC m=+139.094370126" Jan 22 14:16:21 crc kubenswrapper[5099]: I0122 14:16:21.899466 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-87mz9" event={"ID":"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9","Type":"ContainerStarted","Data":"30d3055da67bb4a94518bd133bb5fc92d8dd57dfcfd1904f88a446a4af34a07c"} Jan 22 14:16:21 crc kubenswrapper[5099]: I0122 14:16:21.917871 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-87mz9" podStartSLOduration=5.630586363 podStartE2EDuration="16.917849235s" podCreationTimestamp="2026-01-22 14:16:05 +0000 UTC" firstStartedPulling="2026-01-22 14:16:07.485616104 +0000 UTC m=+125.193366341" lastFinishedPulling="2026-01-22 14:16:18.772878976 +0000 UTC m=+136.480629213" observedRunningTime="2026-01-22 14:16:21.916954992 +0000 UTC m=+139.624705239" watchObservedRunningTime="2026-01-22 14:16:21.917849235 +0000 UTC m=+139.625599472" Jan 22 14:16:21 crc kubenswrapper[5099]: I0122 14:16:21.919384 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k7vcn" podStartSLOduration=4.775614148 podStartE2EDuration="14.919373635s" podCreationTimestamp="2026-01-22 14:16:07 +0000 UTC" firstStartedPulling="2026-01-22 14:16:08.631675297 +0000 UTC m=+126.339425534" lastFinishedPulling="2026-01-22 14:16:18.775434784 +0000 UTC m=+136.483185021" observedRunningTime="2026-01-22 14:16:21.403429122 +0000 UTC m=+139.111179369" watchObservedRunningTime="2026-01-22 14:16:21.919373635 +0000 UTC m=+139.627123872" Jan 22 14:16:23 crc kubenswrapper[5099]: E0122 14:16:23.555818 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fac788e02f10ec85576f9fffa7ba022941fc907ea45deb0ffaacd11fcdcbfc4" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 14:16:23 crc kubenswrapper[5099]: E0122 14:16:23.557248 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fac788e02f10ec85576f9fffa7ba022941fc907ea45deb0ffaacd11fcdcbfc4" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 14:16:23 crc kubenswrapper[5099]: E0122 14:16:23.558873 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fac788e02f10ec85576f9fffa7ba022941fc907ea45deb0ffaacd11fcdcbfc4" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 14:16:23 crc kubenswrapper[5099]: E0122 14:16:23.558971 5099 prober.go:104] "Probe 
errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" podUID="4944256e-76fd-4652-80c6-a5f9217aadc3" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 22 14:16:25 crc kubenswrapper[5099]: I0122 14:16:25.478282 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jcl5d" Jan 22 14:16:25 crc kubenswrapper[5099]: I0122 14:16:25.478672 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-jcl5d" Jan 22 14:16:25 crc kubenswrapper[5099]: I0122 14:16:25.611508 5099 ???:1] "http: TLS handshake error from 192.168.126.11:55206: no serving certificate available for the kubelet" Jan 22 14:16:25 crc kubenswrapper[5099]: I0122 14:16:25.616932 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jcl5d" Jan 22 14:16:25 crc kubenswrapper[5099]: I0122 14:16:25.697873 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gj5rr" Jan 22 14:16:25 crc kubenswrapper[5099]: I0122 14:16:25.697949 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-gj5rr" Jan 22 14:16:25 crc kubenswrapper[5099]: I0122 14:16:25.742702 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gj5rr" Jan 22 14:16:25 crc kubenswrapper[5099]: I0122 14:16:25.904298 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-87mz9" Jan 22 14:16:25 crc kubenswrapper[5099]: I0122 14:16:25.904359 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-87mz9" Jan 22 14:16:25 crc kubenswrapper[5099]: I0122 14:16:25.968718 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-87mz9" Jan 22 14:16:25 crc kubenswrapper[5099]: I0122 14:16:25.984278 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gj5rr" Jan 22 14:16:25 crc kubenswrapper[5099]: I0122 14:16:25.995346 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jcl5d" Jan 22 14:16:26 crc kubenswrapper[5099]: I0122 14:16:26.088653 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mwp6p" Jan 22 14:16:26 crc kubenswrapper[5099]: I0122 14:16:26.089252 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-mwp6p" Jan 22 14:16:26 crc kubenswrapper[5099]: I0122 14:16:26.136677 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mwp6p" Jan 22 14:16:26 crc kubenswrapper[5099]: I0122 14:16:26.982485 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mwp6p" Jan 22 14:16:27 crc kubenswrapper[5099]: I0122 14:16:27.481720 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-k7vcn" Jan 22 14:16:27 crc kubenswrapper[5099]: I0122 14:16:27.482137 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-k7vcn" Jan 22 14:16:27 crc kubenswrapper[5099]: I0122 14:16:27.523842 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k7vcn" Jan 22 14:16:27 crc kubenswrapper[5099]: I0122 14:16:27.926595 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-w7w9q" Jan 22 14:16:27 crc kubenswrapper[5099]: I0122 14:16:27.926677 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w7w9q" Jan 22 14:16:27 crc kubenswrapper[5099]: I0122 14:16:27.984985 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w7w9q" Jan 22 14:16:28 crc kubenswrapper[5099]: I0122 14:16:28.039706 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k7vcn" Jan 22 14:16:28 crc kubenswrapper[5099]: I0122 14:16:28.050242 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w7w9q" Jan 22 14:16:28 crc kubenswrapper[5099]: I0122 14:16:28.053745 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:16:28 crc kubenswrapper[5099]: I0122 14:16:28.058404 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-gflg8" Jan 22 14:16:28 crc kubenswrapper[5099]: I0122 14:16:28.147969 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mwp6p"] Jan 22 14:16:29 crc kubenswrapper[5099]: I0122 14:16:29.960784 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mwp6p" podUID="e94546d4-9e10-402a-9fc3-2a1c8f755713" containerName="registry-server" containerID="cri-o://90623c73fc5da9c095559cdb9f5dc28a1f357a1dfbf9b8766447dcb1b0e7cc4b" gracePeriod=2 Jan 22 14:16:30 crc kubenswrapper[5099]: I0122 14:16:30.543738 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7w9q"] Jan 22 14:16:30 crc kubenswrapper[5099]: I0122 14:16:30.544064 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w7w9q" podUID="7779fac0-5e1b-4d4c-bd8e-299416704f12" containerName="registry-server" containerID="cri-o://b4b82faffed4bf4f28f191d48a526e6051823d892f28f74ff6bf8bae67025144" gracePeriod=2 Jan 22 14:16:31 crc kubenswrapper[5099]: I0122 14:16:31.974032 5099 generic.go:358] "Generic (PLEG): container finished" podID="e94546d4-9e10-402a-9fc3-2a1c8f755713" containerID="90623c73fc5da9c095559cdb9f5dc28a1f357a1dfbf9b8766447dcb1b0e7cc4b" exitCode=0 Jan 22 14:16:31 crc kubenswrapper[5099]: I0122 14:16:31.974211 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwp6p" event={"ID":"e94546d4-9e10-402a-9fc3-2a1c8f755713","Type":"ContainerDied","Data":"90623c73fc5da9c095559cdb9f5dc28a1f357a1dfbf9b8766447dcb1b0e7cc4b"} Jan 22 14:16:31 crc kubenswrapper[5099]: I0122 14:16:31.977109 5099 generic.go:358] "Generic (PLEG): container finished" 
podID="7779fac0-5e1b-4d4c-bd8e-299416704f12" containerID="b4b82faffed4bf4f28f191d48a526e6051823d892f28f74ff6bf8bae67025144" exitCode=0 Jan 22 14:16:31 crc kubenswrapper[5099]: I0122 14:16:31.977253 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7w9q" event={"ID":"7779fac0-5e1b-4d4c-bd8e-299416704f12","Type":"ContainerDied","Data":"b4b82faffed4bf4f28f191d48a526e6051823d892f28f74ff6bf8bae67025144"} Jan 22 14:16:32 crc kubenswrapper[5099]: I0122 14:16:32.682417 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:16:33 crc kubenswrapper[5099]: E0122 14:16:33.556570 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fac788e02f10ec85576f9fffa7ba022941fc907ea45deb0ffaacd11fcdcbfc4" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 14:16:33 crc kubenswrapper[5099]: E0122 14:16:33.557608 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fac788e02f10ec85576f9fffa7ba022941fc907ea45deb0ffaacd11fcdcbfc4" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 14:16:33 crc kubenswrapper[5099]: E0122 14:16:33.559444 5099 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2fac788e02f10ec85576f9fffa7ba022941fc907ea45deb0ffaacd11fcdcbfc4" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 14:16:33 crc kubenswrapper[5099]: E0122 14:16:33.559518 5099 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" podUID="4944256e-76fd-4652-80c6-a5f9217aadc3" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 22 14:16:34 crc kubenswrapper[5099]: I0122 14:16:34.387524 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-b798r" Jan 22 14:16:34 crc kubenswrapper[5099]: I0122 14:16:34.789744 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7w9q" Jan 22 14:16:34 crc kubenswrapper[5099]: I0122 14:16:34.842598 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mwp6p" Jan 22 14:16:34 crc kubenswrapper[5099]: I0122 14:16:34.884493 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7779fac0-5e1b-4d4c-bd8e-299416704f12-catalog-content\") pod \"7779fac0-5e1b-4d4c-bd8e-299416704f12\" (UID: \"7779fac0-5e1b-4d4c-bd8e-299416704f12\") " Jan 22 14:16:34 crc kubenswrapper[5099]: I0122 14:16:34.884836 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4jvm\" (UniqueName: \"kubernetes.io/projected/7779fac0-5e1b-4d4c-bd8e-299416704f12-kube-api-access-w4jvm\") pod \"7779fac0-5e1b-4d4c-bd8e-299416704f12\" (UID: \"7779fac0-5e1b-4d4c-bd8e-299416704f12\") " Jan 22 14:16:34 crc kubenswrapper[5099]: I0122 14:16:34.884955 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7779fac0-5e1b-4d4c-bd8e-299416704f12-utilities\") pod \"7779fac0-5e1b-4d4c-bd8e-299416704f12\" (UID: \"7779fac0-5e1b-4d4c-bd8e-299416704f12\") " Jan 22 14:16:34 crc kubenswrapper[5099]: I0122 14:16:34.892832 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7779fac0-5e1b-4d4c-bd8e-299416704f12-utilities" (OuterVolumeSpecName: "utilities") pod "7779fac0-5e1b-4d4c-bd8e-299416704f12" (UID: "7779fac0-5e1b-4d4c-bd8e-299416704f12"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:34 crc kubenswrapper[5099]: I0122 14:16:34.894221 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7779fac0-5e1b-4d4c-bd8e-299416704f12-kube-api-access-w4jvm" (OuterVolumeSpecName: "kube-api-access-w4jvm") pod "7779fac0-5e1b-4d4c-bd8e-299416704f12" (UID: "7779fac0-5e1b-4d4c-bd8e-299416704f12"). InnerVolumeSpecName "kube-api-access-w4jvm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:34 crc kubenswrapper[5099]: I0122 14:16:34.901452 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7779fac0-5e1b-4d4c-bd8e-299416704f12-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7779fac0-5e1b-4d4c-bd8e-299416704f12" (UID: "7779fac0-5e1b-4d4c-bd8e-299416704f12"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:34 crc kubenswrapper[5099]: I0122 14:16:34.986440 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn8ct\" (UniqueName: \"kubernetes.io/projected/e94546d4-9e10-402a-9fc3-2a1c8f755713-kube-api-access-dn8ct\") pod \"e94546d4-9e10-402a-9fc3-2a1c8f755713\" (UID: \"e94546d4-9e10-402a-9fc3-2a1c8f755713\") " Jan 22 14:16:34 crc kubenswrapper[5099]: I0122 14:16:34.986512 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e94546d4-9e10-402a-9fc3-2a1c8f755713-catalog-content\") pod \"e94546d4-9e10-402a-9fc3-2a1c8f755713\" (UID: \"e94546d4-9e10-402a-9fc3-2a1c8f755713\") " Jan 22 14:16:34 crc kubenswrapper[5099]: I0122 14:16:34.986603 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e94546d4-9e10-402a-9fc3-2a1c8f755713-utilities\") pod \"e94546d4-9e10-402a-9fc3-2a1c8f755713\" (UID: \"e94546d4-9e10-402a-9fc3-2a1c8f755713\") " Jan 22 14:16:34 crc kubenswrapper[5099]: I0122 14:16:34.987144 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w4jvm\" (UniqueName: \"kubernetes.io/projected/7779fac0-5e1b-4d4c-bd8e-299416704f12-kube-api-access-w4jvm\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:34 crc kubenswrapper[5099]: I0122 14:16:34.987201 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7779fac0-5e1b-4d4c-bd8e-299416704f12-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:34 crc kubenswrapper[5099]: I0122 14:16:34.987213 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7779fac0-5e1b-4d4c-bd8e-299416704f12-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:34 crc kubenswrapper[5099]: I0122 14:16:34.987535 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e94546d4-9e10-402a-9fc3-2a1c8f755713-utilities" (OuterVolumeSpecName: "utilities") pod "e94546d4-9e10-402a-9fc3-2a1c8f755713" (UID: "e94546d4-9e10-402a-9fc3-2a1c8f755713"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:34 crc kubenswrapper[5099]: I0122 14:16:34.989596 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e94546d4-9e10-402a-9fc3-2a1c8f755713-kube-api-access-dn8ct" (OuterVolumeSpecName: "kube-api-access-dn8ct") pod "e94546d4-9e10-402a-9fc3-2a1c8f755713" (UID: "e94546d4-9e10-402a-9fc3-2a1c8f755713"). InnerVolumeSpecName "kube-api-access-dn8ct". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:35 crc kubenswrapper[5099]: I0122 14:16:35.000490 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mwp6p" Jan 22 14:16:35 crc kubenswrapper[5099]: I0122 14:16:35.000683 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwp6p" event={"ID":"e94546d4-9e10-402a-9fc3-2a1c8f755713","Type":"ContainerDied","Data":"c08103ba40b3fd85dcdc2a6a0e7fc423803621fcf77e35aa16af201b3cb99c57"} Jan 22 14:16:35 crc kubenswrapper[5099]: I0122 14:16:35.000798 5099 scope.go:117] "RemoveContainer" containerID="90623c73fc5da9c095559cdb9f5dc28a1f357a1dfbf9b8766447dcb1b0e7cc4b" Jan 22 14:16:35 crc kubenswrapper[5099]: I0122 14:16:35.003341 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7w9q" event={"ID":"7779fac0-5e1b-4d4c-bd8e-299416704f12","Type":"ContainerDied","Data":"8132dbab6c682f8f528b5ee34e29a0865d991f97f26e1f46f0680714c5efd2e1"} Jan 22 14:16:35 crc kubenswrapper[5099]: I0122 14:16:35.003423 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7w9q" Jan 22 14:16:35 crc kubenswrapper[5099]: I0122 14:16:35.023384 5099 scope.go:117] "RemoveContainer" containerID="ce7fb68fa977b1322d84a8da323752f6426f9aba91b2335663450f3926bcc562" Jan 22 14:16:35 crc kubenswrapper[5099]: I0122 14:16:35.027561 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7w9q"] Jan 22 14:16:35 crc kubenswrapper[5099]: I0122 14:16:35.030111 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7w9q"] Jan 22 14:16:35 crc kubenswrapper[5099]: I0122 14:16:35.049535 5099 scope.go:117] "RemoveContainer" containerID="c67d05b65211f33407c27af6faa05bf265e897e0c27969a55f35cb2e73c70ffd" Jan 22 14:16:35 crc kubenswrapper[5099]: I0122 14:16:35.061317 5099 scope.go:117] "RemoveContainer" containerID="b4b82faffed4bf4f28f191d48a526e6051823d892f28f74ff6bf8bae67025144" Jan 22 14:16:35 crc kubenswrapper[5099]: I0122 14:16:35.074312 5099 scope.go:117] "RemoveContainer" containerID="3fe57eb691a53f95dd29c529f6a7fbac280204dd7f8d60858ca24aecc60b24cb" Jan 22 14:16:35 crc kubenswrapper[5099]: I0122 14:16:35.088627 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e94546d4-9e10-402a-9fc3-2a1c8f755713-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:35 crc kubenswrapper[5099]: I0122 14:16:35.088659 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dn8ct\" (UniqueName: \"kubernetes.io/projected/e94546d4-9e10-402a-9fc3-2a1c8f755713-kube-api-access-dn8ct\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:35 crc kubenswrapper[5099]: I0122 14:16:35.088726 5099 scope.go:117] "RemoveContainer" containerID="b49c8de0311c5b7869017842e228768c6124d50a7446f083cd00d79212c026f3" Jan 22 14:16:35 crc kubenswrapper[5099]: I0122 14:16:35.279591 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e94546d4-9e10-402a-9fc3-2a1c8f755713-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e94546d4-9e10-402a-9fc3-2a1c8f755713" (UID: "e94546d4-9e10-402a-9fc3-2a1c8f755713"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:35 crc kubenswrapper[5099]: I0122 14:16:35.292500 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e94546d4-9e10-402a-9fc3-2a1c8f755713-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:35 crc kubenswrapper[5099]: I0122 14:16:35.341589 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mwp6p"] Jan 22 14:16:35 crc kubenswrapper[5099]: I0122 14:16:35.344434 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mwp6p"] Jan 22 14:16:36 crc kubenswrapper[5099]: I0122 14:16:36.004461 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-87mz9" Jan 22 14:16:36 crc kubenswrapper[5099]: I0122 14:16:36.768531 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7779fac0-5e1b-4d4c-bd8e-299416704f12" path="/var/lib/kubelet/pods/7779fac0-5e1b-4d4c-bd8e-299416704f12/volumes" Jan 22 14:16:36 crc kubenswrapper[5099]: I0122 14:16:36.769197 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e94546d4-9e10-402a-9fc3-2a1c8f755713" path="/var/lib/kubelet/pods/e94546d4-9e10-402a-9fc3-2a1c8f755713/volumes" Jan 22 14:16:37 crc kubenswrapper[5099]: I0122 14:16:37.019155 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-g55fr_4944256e-76fd-4652-80c6-a5f9217aadc3/kube-multus-additional-cni-plugins/0.log" Jan 22 14:16:37 crc kubenswrapper[5099]: I0122 14:16:37.019219 5099 generic.go:358] "Generic (PLEG): container finished" podID="4944256e-76fd-4652-80c6-a5f9217aadc3" containerID="2fac788e02f10ec85576f9fffa7ba022941fc907ea45deb0ffaacd11fcdcbfc4" exitCode=137 Jan 22 14:16:37 crc kubenswrapper[5099]: I0122 14:16:37.019257 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" event={"ID":"4944256e-76fd-4652-80c6-a5f9217aadc3","Type":"ContainerDied","Data":"2fac788e02f10ec85576f9fffa7ba022941fc907ea45deb0ffaacd11fcdcbfc4"} Jan 22 14:16:38 crc kubenswrapper[5099]: I0122 14:16:38.111931 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-g55fr_4944256e-76fd-4652-80c6-a5f9217aadc3/kube-multus-additional-cni-plugins/0.log" Jan 22 14:16:38 crc kubenswrapper[5099]: I0122 14:16:38.112380 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" Jan 22 14:16:38 crc kubenswrapper[5099]: I0122 14:16:38.233286 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4944256e-76fd-4652-80c6-a5f9217aadc3-ready\") pod \"4944256e-76fd-4652-80c6-a5f9217aadc3\" (UID: \"4944256e-76fd-4652-80c6-a5f9217aadc3\") " Jan 22 14:16:38 crc kubenswrapper[5099]: I0122 14:16:38.233495 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4944256e-76fd-4652-80c6-a5f9217aadc3-cni-sysctl-allowlist\") pod \"4944256e-76fd-4652-80c6-a5f9217aadc3\" (UID: \"4944256e-76fd-4652-80c6-a5f9217aadc3\") " Jan 22 14:16:38 crc kubenswrapper[5099]: I0122 14:16:38.233594 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwstw\" (UniqueName: \"kubernetes.io/projected/4944256e-76fd-4652-80c6-a5f9217aadc3-kube-api-access-jwstw\") pod \"4944256e-76fd-4652-80c6-a5f9217aadc3\" (UID: \"4944256e-76fd-4652-80c6-a5f9217aadc3\") " Jan 22 14:16:38 crc kubenswrapper[5099]: I0122 14:16:38.233677 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4944256e-76fd-4652-80c6-a5f9217aadc3-tuning-conf-dir\") pod \"4944256e-76fd-4652-80c6-a5f9217aadc3\" (UID: \"4944256e-76fd-4652-80c6-a5f9217aadc3\") " Jan 22 14:16:38 crc kubenswrapper[5099]: I0122 14:16:38.233933 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4944256e-76fd-4652-80c6-a5f9217aadc3-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "4944256e-76fd-4652-80c6-a5f9217aadc3" (UID: "4944256e-76fd-4652-80c6-a5f9217aadc3"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:16:38 crc kubenswrapper[5099]: I0122 14:16:38.234525 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4944256e-76fd-4652-80c6-a5f9217aadc3-ready" (OuterVolumeSpecName: "ready") pod "4944256e-76fd-4652-80c6-a5f9217aadc3" (UID: "4944256e-76fd-4652-80c6-a5f9217aadc3"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:38 crc kubenswrapper[5099]: I0122 14:16:38.234828 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4944256e-76fd-4652-80c6-a5f9217aadc3-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "4944256e-76fd-4652-80c6-a5f9217aadc3" (UID: "4944256e-76fd-4652-80c6-a5f9217aadc3"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:38 crc kubenswrapper[5099]: I0122 14:16:38.241050 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4944256e-76fd-4652-80c6-a5f9217aadc3-kube-api-access-jwstw" (OuterVolumeSpecName: "kube-api-access-jwstw") pod "4944256e-76fd-4652-80c6-a5f9217aadc3" (UID: "4944256e-76fd-4652-80c6-a5f9217aadc3"). InnerVolumeSpecName "kube-api-access-jwstw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:38 crc kubenswrapper[5099]: I0122 14:16:38.334764 5099 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4944256e-76fd-4652-80c6-a5f9217aadc3-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:38 crc kubenswrapper[5099]: I0122 14:16:38.334808 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jwstw\" (UniqueName: \"kubernetes.io/projected/4944256e-76fd-4652-80c6-a5f9217aadc3-kube-api-access-jwstw\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:38 crc kubenswrapper[5099]: I0122 14:16:38.334817 5099 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4944256e-76fd-4652-80c6-a5f9217aadc3-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:38 crc kubenswrapper[5099]: I0122 14:16:38.334825 5099 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4944256e-76fd-4652-80c6-a5f9217aadc3-ready\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:38 crc kubenswrapper[5099]: I0122 14:16:38.539241 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-87mz9"] Jan 22 14:16:38 crc kubenswrapper[5099]: I0122 14:16:38.539563 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-87mz9" podUID="1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9" containerName="registry-server" containerID="cri-o://30d3055da67bb4a94518bd133bb5fc92d8dd57dfcfd1904f88a446a4af34a07c" gracePeriod=2 Jan 22 14:16:39 crc kubenswrapper[5099]: I0122 14:16:39.034989 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-g55fr_4944256e-76fd-4652-80c6-a5f9217aadc3/kube-multus-additional-cni-plugins/0.log" Jan 22 14:16:39 crc kubenswrapper[5099]: I0122 14:16:39.035326 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" event={"ID":"4944256e-76fd-4652-80c6-a5f9217aadc3","Type":"ContainerDied","Data":"788ce4c84386b1303573e7749da6b949dc86c3897ab7a00b3d6af877e6aca734"} Jan 22 14:16:39 crc kubenswrapper[5099]: I0122 14:16:39.035404 5099 scope.go:117] "RemoveContainer" containerID="2fac788e02f10ec85576f9fffa7ba022941fc907ea45deb0ffaacd11fcdcbfc4" Jan 22 14:16:39 crc kubenswrapper[5099]: I0122 14:16:39.035630 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-g55fr" Jan 22 14:16:39 crc kubenswrapper[5099]: I0122 14:16:39.067080 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-g55fr"] Jan 22 14:16:39 crc kubenswrapper[5099]: I0122 14:16:39.075030 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-g55fr"] Jan 22 14:16:40 crc kubenswrapper[5099]: I0122 14:16:40.768825 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4944256e-76fd-4652-80c6-a5f9217aadc3" path="/var/lib/kubelet/pods/4944256e-76fd-4652-80c6-a5f9217aadc3/volumes" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.101673 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.102652 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="55b1116d-86c3-4f74-bc6b-a52f91ef5ba2" containerName="pruner" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.102678 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b1116d-86c3-4f74-bc6b-a52f91ef5ba2" containerName="pruner" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.102693 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4944256e-76fd-4652-80c6-a5f9217aadc3" containerName="kube-multus-additional-cni-plugins" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.102723 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="4944256e-76fd-4652-80c6-a5f9217aadc3" containerName="kube-multus-additional-cni-plugins" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.102737 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e94546d4-9e10-402a-9fc3-2a1c8f755713" containerName="extract-content" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.102744 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="e94546d4-9e10-402a-9fc3-2a1c8f755713" containerName="extract-content" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.102752 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e94546d4-9e10-402a-9fc3-2a1c8f755713" containerName="registry-server" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.102761 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="e94546d4-9e10-402a-9fc3-2a1c8f755713" containerName="registry-server" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.102815 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7779fac0-5e1b-4d4c-bd8e-299416704f12" containerName="extract-content" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.102824 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="7779fac0-5e1b-4d4c-bd8e-299416704f12" containerName="extract-content" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.102835 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e94546d4-9e10-402a-9fc3-2a1c8f755713" containerName="extract-utilities" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.102842 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="e94546d4-9e10-402a-9fc3-2a1c8f755713" containerName="extract-utilities" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.102853 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7779fac0-5e1b-4d4c-bd8e-299416704f12" containerName="extract-utilities" 
Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.102860 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="7779fac0-5e1b-4d4c-bd8e-299416704f12" containerName="extract-utilities" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.102893 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7779fac0-5e1b-4d4c-bd8e-299416704f12" containerName="registry-server" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.102901 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="7779fac0-5e1b-4d4c-bd8e-299416704f12" containerName="registry-server" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.103061 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="4944256e-76fd-4652-80c6-a5f9217aadc3" containerName="kube-multus-additional-cni-plugins" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.103080 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="7779fac0-5e1b-4d4c-bd8e-299416704f12" containerName="registry-server" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.103091 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="e94546d4-9e10-402a-9fc3-2a1c8f755713" containerName="registry-server" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.103122 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="55b1116d-86c3-4f74-bc6b-a52f91ef5ba2" containerName="pruner" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.111742 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.112408 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.114138 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.114392 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.188529 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.188608 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.290185 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.290238 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.290297 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.318123 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.510701 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 14:16:41 crc kubenswrapper[5099]: I0122 14:16:41.701239 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 22 14:16:42 crc kubenswrapper[5099]: I0122 14:16:42.065208 5099 generic.go:358] "Generic (PLEG): container finished" podID="1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9" containerID="30d3055da67bb4a94518bd133bb5fc92d8dd57dfcfd1904f88a446a4af34a07c" exitCode=0 Jan 22 14:16:42 crc kubenswrapper[5099]: I0122 14:16:42.065274 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-87mz9" event={"ID":"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9","Type":"ContainerDied","Data":"30d3055da67bb4a94518bd133bb5fc92d8dd57dfcfd1904f88a446a4af34a07c"} Jan 22 14:16:42 crc kubenswrapper[5099]: I0122 14:16:42.067309 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79vth" event={"ID":"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe","Type":"ContainerStarted","Data":"096fd23928ef15c4eea4dee2724aefcc2a297e9a3d56e1d546c15e23388d92b9"} Jan 22 14:16:42 crc kubenswrapper[5099]: I0122 14:16:42.068308 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88","Type":"ContainerStarted","Data":"cad08c85d6fd27043129ab9f3ce62ca13b9a1ff7bf669b577f984bc4b6a6749b"} Jan 22 14:16:42 crc kubenswrapper[5099]: I0122 14:16:42.069551 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwzrn" event={"ID":"fc17a8e9-de13-44d6-aa07-d13560bcd275","Type":"ContainerStarted","Data":"865f56fc1201ed8a3367ba1ae19e76dcabf87f6ff620363010b046684c839137"} Jan 22 14:16:42 crc kubenswrapper[5099]: I0122 14:16:42.204893 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-87mz9" Jan 22 14:16:42 crc kubenswrapper[5099]: I0122 14:16:42.301491 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9-utilities\") pod \"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9\" (UID: \"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9\") " Jan 22 14:16:42 crc kubenswrapper[5099]: I0122 14:16:42.301572 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9-catalog-content\") pod \"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9\" (UID: \"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9\") " Jan 22 14:16:42 crc kubenswrapper[5099]: I0122 14:16:42.301639 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjnsj\" (UniqueName: \"kubernetes.io/projected/1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9-kube-api-access-xjnsj\") pod \"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9\" (UID: \"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9\") " Jan 22 14:16:42 crc kubenswrapper[5099]: I0122 14:16:42.312924 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9-kube-api-access-xjnsj" (OuterVolumeSpecName: "kube-api-access-xjnsj") pod "1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9" (UID: "1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9"). InnerVolumeSpecName "kube-api-access-xjnsj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:42 crc kubenswrapper[5099]: I0122 14:16:42.314287 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9-utilities" (OuterVolumeSpecName: "utilities") pod "1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9" (UID: "1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:42 crc kubenswrapper[5099]: I0122 14:16:42.358883 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9" (UID: "1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:42 crc kubenswrapper[5099]: I0122 14:16:42.403501 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:42 crc kubenswrapper[5099]: I0122 14:16:42.403544 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:42 crc kubenswrapper[5099]: I0122 14:16:42.403560 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xjnsj\" (UniqueName: \"kubernetes.io/projected/1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9-kube-api-access-xjnsj\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:43 crc kubenswrapper[5099]: I0122 14:16:43.079154 5099 generic.go:358] "Generic (PLEG): container finished" podID="fc17a8e9-de13-44d6-aa07-d13560bcd275" containerID="865f56fc1201ed8a3367ba1ae19e76dcabf87f6ff620363010b046684c839137" exitCode=0 Jan 22 14:16:43 crc kubenswrapper[5099]: I0122 14:16:43.079374 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwzrn" event={"ID":"fc17a8e9-de13-44d6-aa07-d13560bcd275","Type":"ContainerDied","Data":"865f56fc1201ed8a3367ba1ae19e76dcabf87f6ff620363010b046684c839137"} Jan 22 14:16:43 crc kubenswrapper[5099]: I0122 14:16:43.081791 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-87mz9" event={"ID":"1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9","Type":"ContainerDied","Data":"8684956b0f6dadb4ddbe93d707b7ba9ccce9fbc6d182ecd1d05787d9db148cbb"} Jan 22 14:16:43 crc kubenswrapper[5099]: I0122 14:16:43.081831 5099 scope.go:117] "RemoveContainer" containerID="30d3055da67bb4a94518bd133bb5fc92d8dd57dfcfd1904f88a446a4af34a07c" Jan 22 14:16:43 crc kubenswrapper[5099]: I0122 14:16:43.081971 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-87mz9" Jan 22 14:16:43 crc kubenswrapper[5099]: I0122 14:16:43.086858 5099 generic.go:358] "Generic (PLEG): container finished" podID="2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe" containerID="096fd23928ef15c4eea4dee2724aefcc2a297e9a3d56e1d546c15e23388d92b9" exitCode=0 Jan 22 14:16:43 crc kubenswrapper[5099]: I0122 14:16:43.086992 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79vth" event={"ID":"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe","Type":"ContainerDied","Data":"096fd23928ef15c4eea4dee2724aefcc2a297e9a3d56e1d546c15e23388d92b9"} Jan 22 14:16:43 crc kubenswrapper[5099]: I0122 14:16:43.089910 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88","Type":"ContainerStarted","Data":"4350c11f15cb1ec713f6fd4d6648731833a9c1cb5331b81870d0907e8f981804"} Jan 22 14:16:43 crc kubenswrapper[5099]: I0122 14:16:43.122396 5099 scope.go:117] "RemoveContainer" containerID="842bd0cff1fc1c00454f305adcbe0df8c22af0edef5e5b008c8a5bbf192b666e" Jan 22 14:16:43 crc kubenswrapper[5099]: I0122 14:16:43.140449 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=2.140429445 podStartE2EDuration="2.140429445s" podCreationTimestamp="2026-01-22 14:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:43.139622133 +0000 UTC m=+160.847372390" watchObservedRunningTime="2026-01-22 14:16:43.140429445 +0000 UTC m=+160.848179682" Jan 22 14:16:43 crc kubenswrapper[5099]: I0122 14:16:43.158657 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-87mz9"] Jan 22 14:16:43 crc kubenswrapper[5099]: I0122 14:16:43.160979 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-87mz9"] Jan 22 14:16:43 crc kubenswrapper[5099]: I0122 14:16:43.173361 5099 scope.go:117] "RemoveContainer" containerID="bde296aef27edbe12383a5817cfbe46783277527091cc76426a71226f9c9cf22" Jan 22 14:16:44 crc kubenswrapper[5099]: I0122 14:16:44.098766 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79vth" event={"ID":"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe","Type":"ContainerStarted","Data":"7a5610d6ce5903652f450a646ba9a4e976a3a9fbd9cde3f249e070dc9b370ca8"} Jan 22 14:16:44 crc kubenswrapper[5099]: I0122 14:16:44.099936 5099 generic.go:358] "Generic (PLEG): container finished" podID="ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88" containerID="4350c11f15cb1ec713f6fd4d6648731833a9c1cb5331b81870d0907e8f981804" exitCode=0 Jan 22 14:16:44 crc kubenswrapper[5099]: I0122 14:16:44.100023 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88","Type":"ContainerDied","Data":"4350c11f15cb1ec713f6fd4d6648731833a9c1cb5331b81870d0907e8f981804"} Jan 22 14:16:44 crc kubenswrapper[5099]: I0122 14:16:44.102193 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwzrn" event={"ID":"fc17a8e9-de13-44d6-aa07-d13560bcd275","Type":"ContainerStarted","Data":"d2a7f4ca2da603774ab59872f8e9bb9db2e59638b1de9bb09d6c2598c7a3ddd0"} Jan 22 14:16:44 crc kubenswrapper[5099]: I0122 14:16:44.118655 5099 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-79vth" podStartSLOduration=6.609418325 podStartE2EDuration="36.118636777s" podCreationTimestamp="2026-01-22 14:16:08 +0000 UTC" firstStartedPulling="2026-01-22 14:16:11.673090511 +0000 UTC m=+129.380840748" lastFinishedPulling="2026-01-22 14:16:41.182308963 +0000 UTC m=+158.890059200" observedRunningTime="2026-01-22 14:16:44.118015651 +0000 UTC m=+161.825765888" watchObservedRunningTime="2026-01-22 14:16:44.118636777 +0000 UTC m=+161.826387014" Jan 22 14:16:44 crc kubenswrapper[5099]: I0122 14:16:44.149971 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hwzrn" podStartSLOduration=7.624976417 podStartE2EDuration="36.149946922s" podCreationTimestamp="2026-01-22 14:16:08 +0000 UTC" firstStartedPulling="2026-01-22 14:16:12.678848875 +0000 UTC m=+130.386599132" lastFinishedPulling="2026-01-22 14:16:41.2038194 +0000 UTC m=+158.911569637" observedRunningTime="2026-01-22 14:16:44.146611294 +0000 UTC m=+161.854361531" watchObservedRunningTime="2026-01-22 14:16:44.149946922 +0000 UTC m=+161.857697159" Jan 22 14:16:44 crc kubenswrapper[5099]: I0122 14:16:44.778056 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9" path="/var/lib/kubelet/pods/1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9/volumes" Jan 22 14:16:45 crc kubenswrapper[5099]: I0122 14:16:45.339642 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 14:16:45 crc kubenswrapper[5099]: I0122 14:16:45.440547 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88-kubelet-dir\") pod \"ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88\" (UID: \"ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88\") " Jan 22 14:16:45 crc kubenswrapper[5099]: I0122 14:16:45.440719 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88-kube-api-access\") pod \"ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88\" (UID: \"ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88\") " Jan 22 14:16:45 crc kubenswrapper[5099]: I0122 14:16:45.440776 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88" (UID: "ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:16:45 crc kubenswrapper[5099]: I0122 14:16:45.440979 5099 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:45 crc kubenswrapper[5099]: I0122 14:16:45.447307 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88" (UID: "ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88"). InnerVolumeSpecName "kube-api-access". 
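Annotation: the two "Observed pod startup duration" entries above report both podStartE2EDuration (pod creation to first observed running) and podStartSLOduration, which here equals the same window minus the time spent pulling images. For redhat-operators-79vth, 36.118636777s end to end minus the pull window 14:16:11.673090511 → 14:16:41.182308963 (29.509218452s) is exactly the logged 6.609418325s. A short recomputation from the printed timestamps, as an illustration of the arithmetic rather than kubelet code:

```go
// Recomputing the durations in the 14:16:44.118655 tracker entry above.
// podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
// podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling)
package main

import (
	"fmt"
	"time"
)

// Layout matching the wall-clock part of the timestamps printed in the log.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-22 14:16:08 +0000 UTC")
	firstPull := mustParse("2026-01-22 14:16:11.673090511 +0000 UTC")
	lastPull := mustParse("2026-01-22 14:16:41.182308963 +0000 UTC")
	running := mustParse("2026-01-22 14:16:44.118636777 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println("podStartE2EDuration:", e2e) // 36.118636777s, as logged
	fmt.Println("podStartSLOduration:", slo) // 6.609418325s, as logged
}
```

The redhat-operators-hwzrn entry checks out the same way (36.149946922s minus its roughly 28.525s pull window gives the logged ~7.625s, to within the sub-microsecond rounding of the printed timestamps).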
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:45 crc kubenswrapper[5099]: I0122 14:16:45.542284 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:46 crc kubenswrapper[5099]: I0122 14:16:46.114639 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88","Type":"ContainerDied","Data":"cad08c85d6fd27043129ab9f3ce62ca13b9a1ff7bf669b577f984bc4b6a6749b"} Jan 22 14:16:46 crc kubenswrapper[5099]: I0122 14:16:46.114680 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cad08c85d6fd27043129ab9f3ce62ca13b9a1ff7bf669b577f984bc4b6a6749b" Jan 22 14:16:46 crc kubenswrapper[5099]: I0122 14:16:46.114750 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 14:16:46 crc kubenswrapper[5099]: I0122 14:16:46.118731 5099 ???:1] "http: TLS handshake error from 192.168.126.11:53628: no serving certificate available for the kubelet" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.506271 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.507507 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9" containerName="extract-utilities" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.507523 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9" containerName="extract-utilities" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.507533 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88" containerName="pruner" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.507540 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88" containerName="pruner" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.507561 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9" containerName="extract-content" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.507569 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9" containerName="extract-content" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.507578 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9" containerName="registry-server" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.507584 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9" containerName="registry-server" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.507723 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="1a3e9623-ffb0-4f51-bba2-9e56a1b3ddb9" containerName="registry-server" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.507746 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="ffcc0ebe-ecb3-4f11-93c6-5f6853d97a88" containerName="pruner" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.514932 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/installer-12-crc"] Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.515140 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.523357 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.524018 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.577581 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79c94e89-2eb8-43af-9059-83ee27755a7d-kube-api-access\") pod \"installer-12-crc\" (UID: \"79c94e89-2eb8-43af-9059-83ee27755a7d\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.577644 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79c94e89-2eb8-43af-9059-83ee27755a7d-kubelet-dir\") pod \"installer-12-crc\" (UID: \"79c94e89-2eb8-43af-9059-83ee27755a7d\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.577734 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/79c94e89-2eb8-43af-9059-83ee27755a7d-var-lock\") pod \"installer-12-crc\" (UID: \"79c94e89-2eb8-43af-9059-83ee27755a7d\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.678700 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79c94e89-2eb8-43af-9059-83ee27755a7d-kube-api-access\") pod \"installer-12-crc\" (UID: \"79c94e89-2eb8-43af-9059-83ee27755a7d\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.678774 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79c94e89-2eb8-43af-9059-83ee27755a7d-kubelet-dir\") pod \"installer-12-crc\" (UID: \"79c94e89-2eb8-43af-9059-83ee27755a7d\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.678842 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/79c94e89-2eb8-43af-9059-83ee27755a7d-var-lock\") pod \"installer-12-crc\" (UID: \"79c94e89-2eb8-43af-9059-83ee27755a7d\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.678954 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79c94e89-2eb8-43af-9059-83ee27755a7d-kubelet-dir\") pod \"installer-12-crc\" (UID: \"79c94e89-2eb8-43af-9059-83ee27755a7d\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.678975 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/79c94e89-2eb8-43af-9059-83ee27755a7d-var-lock\") pod 
\"installer-12-crc\" (UID: \"79c94e89-2eb8-43af-9059-83ee27755a7d\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.699876 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79c94e89-2eb8-43af-9059-83ee27755a7d-kube-api-access\") pod \"installer-12-crc\" (UID: \"79c94e89-2eb8-43af-9059-83ee27755a7d\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.836263 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.915381 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-79vth" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.915701 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-79vth" Jan 22 14:16:48 crc kubenswrapper[5099]: I0122 14:16:48.971287 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-79vth" Jan 22 14:16:49 crc kubenswrapper[5099]: I0122 14:16:49.168647 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-79vth" Jan 22 14:16:49 crc kubenswrapper[5099]: I0122 14:16:49.312208 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 22 14:16:49 crc kubenswrapper[5099]: I0122 14:16:49.334054 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hwzrn" Jan 22 14:16:49 crc kubenswrapper[5099]: I0122 14:16:49.334095 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-hwzrn" Jan 22 14:16:49 crc kubenswrapper[5099]: I0122 14:16:49.374790 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hwzrn" Jan 22 14:16:50 crc kubenswrapper[5099]: I0122 14:16:50.141349 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"79c94e89-2eb8-43af-9059-83ee27755a7d","Type":"ContainerStarted","Data":"376978eb6341a691bcdd68c2cd7bab25071d47f7ca27032d86718e59bb70516b"} Jan 22 14:16:50 crc kubenswrapper[5099]: I0122 14:16:50.143104 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"79c94e89-2eb8-43af-9059-83ee27755a7d","Type":"ContainerStarted","Data":"69194b7dbb27dbdb30a73c5a05bde40d764c78ed1bc1bba978b8371a20393661"} Jan 22 14:16:50 crc kubenswrapper[5099]: I0122 14:16:50.164385 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=2.164350354 podStartE2EDuration="2.164350354s" podCreationTimestamp="2026-01-22 14:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:50.159736493 +0000 UTC m=+167.867486730" watchObservedRunningTime="2026-01-22 14:16:50.164350354 +0000 UTC m=+167.872100631" Jan 22 14:16:50 crc kubenswrapper[5099]: I0122 14:16:50.222036 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-hwzrn" Jan 22 14:16:51 crc kubenswrapper[5099]: I0122 14:16:51.200666 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hwzrn"] Jan 22 14:16:51 crc kubenswrapper[5099]: I0122 14:16:51.904973 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:16:52 crc kubenswrapper[5099]: I0122 14:16:52.151371 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hwzrn" podUID="fc17a8e9-de13-44d6-aa07-d13560bcd275" containerName="registry-server" containerID="cri-o://d2a7f4ca2da603774ab59872f8e9bb9db2e59638b1de9bb09d6c2598c7a3ddd0" gracePeriod=2 Jan 22 14:16:52 crc kubenswrapper[5099]: I0122 14:16:52.520538 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hwzrn" Jan 22 14:16:52 crc kubenswrapper[5099]: I0122 14:16:52.640091 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc17a8e9-de13-44d6-aa07-d13560bcd275-catalog-content\") pod \"fc17a8e9-de13-44d6-aa07-d13560bcd275\" (UID: \"fc17a8e9-de13-44d6-aa07-d13560bcd275\") " Jan 22 14:16:52 crc kubenswrapper[5099]: I0122 14:16:52.640257 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc17a8e9-de13-44d6-aa07-d13560bcd275-utilities\") pod \"fc17a8e9-de13-44d6-aa07-d13560bcd275\" (UID: \"fc17a8e9-de13-44d6-aa07-d13560bcd275\") " Jan 22 14:16:52 crc kubenswrapper[5099]: I0122 14:16:52.640356 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfggw\" (UniqueName: \"kubernetes.io/projected/fc17a8e9-de13-44d6-aa07-d13560bcd275-kube-api-access-zfggw\") pod \"fc17a8e9-de13-44d6-aa07-d13560bcd275\" (UID: \"fc17a8e9-de13-44d6-aa07-d13560bcd275\") " Jan 22 14:16:52 crc kubenswrapper[5099]: I0122 14:16:52.641857 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc17a8e9-de13-44d6-aa07-d13560bcd275-utilities" (OuterVolumeSpecName: "utilities") pod "fc17a8e9-de13-44d6-aa07-d13560bcd275" (UID: "fc17a8e9-de13-44d6-aa07-d13560bcd275"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:52 crc kubenswrapper[5099]: I0122 14:16:52.647835 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc17a8e9-de13-44d6-aa07-d13560bcd275-kube-api-access-zfggw" (OuterVolumeSpecName: "kube-api-access-zfggw") pod "fc17a8e9-de13-44d6-aa07-d13560bcd275" (UID: "fc17a8e9-de13-44d6-aa07-d13560bcd275"). InnerVolumeSpecName "kube-api-access-zfggw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:52 crc kubenswrapper[5099]: I0122 14:16:52.741663 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc17a8e9-de13-44d6-aa07-d13560bcd275-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:52 crc kubenswrapper[5099]: I0122 14:16:52.741720 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zfggw\" (UniqueName: \"kubernetes.io/projected/fc17a8e9-de13-44d6-aa07-d13560bcd275-kube-api-access-zfggw\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:52 crc kubenswrapper[5099]: I0122 14:16:52.759003 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc17a8e9-de13-44d6-aa07-d13560bcd275-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc17a8e9-de13-44d6-aa07-d13560bcd275" (UID: "fc17a8e9-de13-44d6-aa07-d13560bcd275"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:52 crc kubenswrapper[5099]: I0122 14:16:52.842988 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc17a8e9-de13-44d6-aa07-d13560bcd275-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:53 crc kubenswrapper[5099]: I0122 14:16:53.159978 5099 generic.go:358] "Generic (PLEG): container finished" podID="fc17a8e9-de13-44d6-aa07-d13560bcd275" containerID="d2a7f4ca2da603774ab59872f8e9bb9db2e59638b1de9bb09d6c2598c7a3ddd0" exitCode=0 Jan 22 14:16:53 crc kubenswrapper[5099]: I0122 14:16:53.160100 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwzrn" event={"ID":"fc17a8e9-de13-44d6-aa07-d13560bcd275","Type":"ContainerDied","Data":"d2a7f4ca2da603774ab59872f8e9bb9db2e59638b1de9bb09d6c2598c7a3ddd0"} Jan 22 14:16:53 crc kubenswrapper[5099]: I0122 14:16:53.160237 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwzrn" event={"ID":"fc17a8e9-de13-44d6-aa07-d13560bcd275","Type":"ContainerDied","Data":"4faa76a2bd77313d7f4a25f368190c1f0515854c63ab53f68dc139fbfa2af7fc"} Jan 22 14:16:53 crc kubenswrapper[5099]: I0122 14:16:53.160269 5099 scope.go:117] "RemoveContainer" containerID="d2a7f4ca2da603774ab59872f8e9bb9db2e59638b1de9bb09d6c2598c7a3ddd0" Jan 22 14:16:53 crc kubenswrapper[5099]: I0122 14:16:53.160587 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hwzrn" Jan 22 14:16:53 crc kubenswrapper[5099]: I0122 14:16:53.181739 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hwzrn"] Jan 22 14:16:53 crc kubenswrapper[5099]: I0122 14:16:53.182919 5099 scope.go:117] "RemoveContainer" containerID="865f56fc1201ed8a3367ba1ae19e76dcabf87f6ff620363010b046684c839137" Jan 22 14:16:53 crc kubenswrapper[5099]: I0122 14:16:53.187296 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hwzrn"] Jan 22 14:16:53 crc kubenswrapper[5099]: I0122 14:16:53.202208 5099 scope.go:117] "RemoveContainer" containerID="c7f288163b683d6100e77e1ecbb23621dfda5dc11302984b12581bbd63d3c6d5" Jan 22 14:16:53 crc kubenswrapper[5099]: I0122 14:16:53.218462 5099 scope.go:117] "RemoveContainer" containerID="d2a7f4ca2da603774ab59872f8e9bb9db2e59638b1de9bb09d6c2598c7a3ddd0" Jan 22 14:16:53 crc kubenswrapper[5099]: E0122 14:16:53.219009 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2a7f4ca2da603774ab59872f8e9bb9db2e59638b1de9bb09d6c2598c7a3ddd0\": container with ID starting with d2a7f4ca2da603774ab59872f8e9bb9db2e59638b1de9bb09d6c2598c7a3ddd0 not found: ID does not exist" containerID="d2a7f4ca2da603774ab59872f8e9bb9db2e59638b1de9bb09d6c2598c7a3ddd0" Jan 22 14:16:53 crc kubenswrapper[5099]: I0122 14:16:53.219080 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2a7f4ca2da603774ab59872f8e9bb9db2e59638b1de9bb09d6c2598c7a3ddd0"} err="failed to get container status \"d2a7f4ca2da603774ab59872f8e9bb9db2e59638b1de9bb09d6c2598c7a3ddd0\": rpc error: code = NotFound desc = could not find container \"d2a7f4ca2da603774ab59872f8e9bb9db2e59638b1de9bb09d6c2598c7a3ddd0\": container with ID starting with d2a7f4ca2da603774ab59872f8e9bb9db2e59638b1de9bb09d6c2598c7a3ddd0 not found: ID does not exist" Jan 22 14:16:53 crc kubenswrapper[5099]: I0122 14:16:53.219152 5099 scope.go:117] "RemoveContainer" containerID="865f56fc1201ed8a3367ba1ae19e76dcabf87f6ff620363010b046684c839137" Jan 22 14:16:53 crc kubenswrapper[5099]: E0122 14:16:53.219538 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"865f56fc1201ed8a3367ba1ae19e76dcabf87f6ff620363010b046684c839137\": container with ID starting with 865f56fc1201ed8a3367ba1ae19e76dcabf87f6ff620363010b046684c839137 not found: ID does not exist" containerID="865f56fc1201ed8a3367ba1ae19e76dcabf87f6ff620363010b046684c839137" Jan 22 14:16:53 crc kubenswrapper[5099]: I0122 14:16:53.219572 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"865f56fc1201ed8a3367ba1ae19e76dcabf87f6ff620363010b046684c839137"} err="failed to get container status \"865f56fc1201ed8a3367ba1ae19e76dcabf87f6ff620363010b046684c839137\": rpc error: code = NotFound desc = could not find container \"865f56fc1201ed8a3367ba1ae19e76dcabf87f6ff620363010b046684c839137\": container with ID starting with 865f56fc1201ed8a3367ba1ae19e76dcabf87f6ff620363010b046684c839137 not found: ID does not exist" Jan 22 14:16:53 crc kubenswrapper[5099]: I0122 14:16:53.219591 5099 scope.go:117] "RemoveContainer" containerID="c7f288163b683d6100e77e1ecbb23621dfda5dc11302984b12581bbd63d3c6d5" Jan 22 14:16:53 crc kubenswrapper[5099]: E0122 14:16:53.220250 5099 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"c7f288163b683d6100e77e1ecbb23621dfda5dc11302984b12581bbd63d3c6d5\": container with ID starting with c7f288163b683d6100e77e1ecbb23621dfda5dc11302984b12581bbd63d3c6d5 not found: ID does not exist" containerID="c7f288163b683d6100e77e1ecbb23621dfda5dc11302984b12581bbd63d3c6d5" Jan 22 14:16:53 crc kubenswrapper[5099]: I0122 14:16:53.220369 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7f288163b683d6100e77e1ecbb23621dfda5dc11302984b12581bbd63d3c6d5"} err="failed to get container status \"c7f288163b683d6100e77e1ecbb23621dfda5dc11302984b12581bbd63d3c6d5\": rpc error: code = NotFound desc = could not find container \"c7f288163b683d6100e77e1ecbb23621dfda5dc11302984b12581bbd63d3c6d5\": container with ID starting with c7f288163b683d6100e77e1ecbb23621dfda5dc11302984b12581bbd63d3c6d5 not found: ID does not exist" Jan 22 14:16:54 crc kubenswrapper[5099]: I0122 14:16:54.768181 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc17a8e9-de13-44d6-aa07-d13560bcd275" path="/var/lib/kubelet/pods/fc17a8e9-de13-44d6-aa07-d13560bcd275/volumes" Jan 22 14:17:06 crc kubenswrapper[5099]: I0122 14:17:06.927827 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-pfh7d"] Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.098228 5099 ???:1] "http: TLS handshake error from 192.168.126.11:51618: no serving certificate available for the kubelet" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.731741 5099 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.732376 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fc17a8e9-de13-44d6-aa07-d13560bcd275" containerName="registry-server" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.732390 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc17a8e9-de13-44d6-aa07-d13560bcd275" containerName="registry-server" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.732411 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fc17a8e9-de13-44d6-aa07-d13560bcd275" containerName="extract-content" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.732418 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc17a8e9-de13-44d6-aa07-d13560bcd275" containerName="extract-content" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.732442 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fc17a8e9-de13-44d6-aa07-d13560bcd275" containerName="extract-utilities" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.732450 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc17a8e9-de13-44d6-aa07-d13560bcd275" containerName="extract-utilities" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.732559 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="fc17a8e9-de13-44d6-aa07-d13560bcd275" containerName="registry-server" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.762150 5099 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.762406 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.762751 5099 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.763536 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://6adc2225720223e57146af518d38a71d01a5a7d3c93cb4c4b55e20bf0cebbff0" gracePeriod=15 Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.763572 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d24c1ca52fc7eb4263fd15ac005d5a88732f554fa7e633ae037611088c865751" gracePeriod=15 Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.763646 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d9b88a4b7534e1edf736c237a20e009b50a64a78928e2fd583b9ba86400b6452" gracePeriod=15 Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.763618 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://57d66dcb61882a772119bb25e7d1b4b49ea5f91bdbc8a2daf7f3301b0eeb4d3e" gracePeriod=15 Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.764185 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.764242 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.764259 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.764310 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.764332 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.764347 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.764402 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.763686 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://b7376664c839f44962e7ba0e53f1895d1e99866d768aa568d31816e23f3959b1" gracePeriod=15 Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.764413 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.764900 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.764929 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.764946 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.764959 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.765007 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.765020 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.765685 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.765711 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.765731 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.765741 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.765919 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.765933 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.765943 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.765953 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.765964 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:17:27 crc kubenswrapper[5099]: 
I0122 14:17:27.765973 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.765988 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.766005 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.766130 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.766140 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.766313 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.800099 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.813846 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.943515 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.943600 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.943658 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.943682 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.944057 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") 
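Annotation: the "Killing container with a grace period" entries at 14:17:27 above stop each container of the old kube-apiserver-crc revision with gracePeriod=15: the runtime asks each process to exit and only kills it forcibly if it is still running when the grace period runs out. A generic sketch of that stop pattern against a local process (the SIGTERM/SIGKILL sequence and the short 2-second demo value are assumptions for illustration, not CRI-O internals):

```go
// Generic grace-period stop, mirroring "Killing container with a grace period
// ... gracePeriod=15": terminate first, kill only if still alive afterwards.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		fmt.Println("grace period expired, killing")
		_ = cmd.Process.Kill()
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// 2s here instead of the 15s the kubelet used above, just to keep the demo short.
	if err := stopWithGrace(cmd, 2*time.Second); err != nil {
		fmt.Println("process ended with:", err)
	}
}
```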
" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.944192 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.944229 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.944252 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.944288 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:27 crc kubenswrapper[5099]: I0122 14:17:27.944348 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.047955 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.048036 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.048054 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.048087 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.048107 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.048110 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.048154 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.048194 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.048244 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.048266 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.048353 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.048403 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.048439 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.048471 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.048504 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.048530 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.048577 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.048607 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.048633 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.048660 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.113697 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:17:28 crc kubenswrapper[5099]: E0122 14:17:28.137119 5099 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d134c8d5001ce openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:17:28.135745998 +0000 UTC m=+205.843496235,LastTimestamp:2026-01-22 14:17:28.135745998 +0000 UTC m=+205.843496235,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.395878 5099 generic.go:358] "Generic (PLEG): container finished" podID="79c94e89-2eb8-43af-9059-83ee27755a7d" containerID="376978eb6341a691bcdd68c2cd7bab25071d47f7ca27032d86718e59bb70516b" exitCode=0 Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.396067 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"79c94e89-2eb8-43af-9059-83ee27755a7d","Type":"ContainerDied","Data":"376978eb6341a691bcdd68c2cd7bab25071d47f7ca27032d86718e59bb70516b"} Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.397639 5099 status_manager.go:895] "Failed to get status for pod" podUID="79c94e89-2eb8-43af-9059-83ee27755a7d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.398086 5099 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.398229 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"fd120c0321837e458748c6547987a0c204c32cc64aaebde9715ac99a2eda1ebd"} Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.398293 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"090dcadf337420b9543c9350762a09554a3918b7ca1fd4bedb3edb3eadc8cde0"} Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.398466 5099 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.398825 5099 status_manager.go:895] "Failed to get status for pod" podUID="79c94e89-2eb8-43af-9059-83ee27755a7d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.399037 5099 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.399361 5099 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.400551 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.402083 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.403144 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d24c1ca52fc7eb4263fd15ac005d5a88732f554fa7e633ae037611088c865751" exitCode=0 Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.403223 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d9b88a4b7534e1edf736c237a20e009b50a64a78928e2fd583b9ba86400b6452" exitCode=0 Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.403242 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b7376664c839f44962e7ba0e53f1895d1e99866d768aa568d31816e23f3959b1" exitCode=0 Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.403258 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="57d66dcb61882a772119bb25e7d1b4b49ea5f91bdbc8a2daf7f3301b0eeb4d3e" exitCode=2 Jan 22 14:17:28 crc kubenswrapper[5099]: I0122 14:17:28.403258 5099 scope.go:117] "RemoveContainer" containerID="563099c8fdc6fc3a36ff525a462a2d830742426eacbf17781d1a891dab9018d8" Jan 22 14:17:28 crc kubenswrapper[5099]: E0122 14:17:28.923440 5099 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d134c8d5001ce openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:17:28.135745998 +0000 UTC m=+205.843496235,LastTimestamp:2026-01-22 14:17:28.135745998 +0000 UTC m=+205.843496235,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:17:29 crc kubenswrapper[5099]: I0122 14:17:29.412469 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 22 14:17:29 crc kubenswrapper[5099]: I0122 14:17:29.758732 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:17:29 crc kubenswrapper[5099]: I0122 14:17:29.759754 5099 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:29 crc kubenswrapper[5099]: I0122 14:17:29.759952 5099 status_manager.go:895] "Failed to get status for pod" podUID="79c94e89-2eb8-43af-9059-83ee27755a7d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:29 crc kubenswrapper[5099]: I0122 14:17:29.769384 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79c94e89-2eb8-43af-9059-83ee27755a7d-kube-api-access\") pod \"79c94e89-2eb8-43af-9059-83ee27755a7d\" (UID: \"79c94e89-2eb8-43af-9059-83ee27755a7d\") " Jan 22 14:17:29 crc kubenswrapper[5099]: I0122 14:17:29.769510 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79c94e89-2eb8-43af-9059-83ee27755a7d-kubelet-dir\") pod \"79c94e89-2eb8-43af-9059-83ee27755a7d\" (UID: \"79c94e89-2eb8-43af-9059-83ee27755a7d\") " Jan 22 14:17:29 crc kubenswrapper[5099]: I0122 14:17:29.769583 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79c94e89-2eb8-43af-9059-83ee27755a7d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "79c94e89-2eb8-43af-9059-83ee27755a7d" (UID: "79c94e89-2eb8-43af-9059-83ee27755a7d"). InnerVolumeSpecName "kubelet-dir". 
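Annotation: with the API server down during the rollover, every status GET and event POST above fails with dial tcp 38.102.83.163:6443: connect: connection refused; the kubelet's own message says it "may retry after sleeping", and the second, identical event dump is that retry. A generic retry-with-backoff sketch for an idempotent POST (standard library only; the URL is the api-int endpoint from the log, but the payload and backoff policy are assumptions, not the kubelet's event client):

```go
// Generic retry-after-sleeping loop for posting while the endpoint refuses
// connections, as in the "Unable to write event (may retry after sleeping)"
// entries above. Backoff policy and payload are illustrative assumptions.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

func postWithRetry(url string, body []byte, attempts int) error {
	backoff := time.Second
	for i := 0; i < attempts; i++ {
		resp, err := http.Post(url, "application/json", bytes.NewReader(body))
		if err == nil {
			resp.Body.Close()
			return nil
		}
		fmt.Printf("attempt %d failed: %v (sleeping %s)\n", i+1, err, backoff)
		time.Sleep(backoff)
		backoff *= 2 // exponential backoff between retries
	}
	return fmt.Errorf("giving up after %d attempts", attempts)
}

func main() {
	// Endpoint taken from the log; with nothing listening, each attempt fails
	// with "connection refused" just as the entries above do.
	url := "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events"
	if err := postWithRetry(url, []byte(`{}`), 3); err != nil {
		fmt.Println(err)
	}
}
```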
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:17:29 crc kubenswrapper[5099]: I0122 14:17:29.769653 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/79c94e89-2eb8-43af-9059-83ee27755a7d-var-lock\") pod \"79c94e89-2eb8-43af-9059-83ee27755a7d\" (UID: \"79c94e89-2eb8-43af-9059-83ee27755a7d\") " Jan 22 14:17:29 crc kubenswrapper[5099]: I0122 14:17:29.769675 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79c94e89-2eb8-43af-9059-83ee27755a7d-var-lock" (OuterVolumeSpecName: "var-lock") pod "79c94e89-2eb8-43af-9059-83ee27755a7d" (UID: "79c94e89-2eb8-43af-9059-83ee27755a7d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:17:29 crc kubenswrapper[5099]: I0122 14:17:29.771846 5099 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/79c94e89-2eb8-43af-9059-83ee27755a7d-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:29 crc kubenswrapper[5099]: I0122 14:17:29.772118 5099 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79c94e89-2eb8-43af-9059-83ee27755a7d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:29 crc kubenswrapper[5099]: I0122 14:17:29.774911 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79c94e89-2eb8-43af-9059-83ee27755a7d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "79c94e89-2eb8-43af-9059-83ee27755a7d" (UID: "79c94e89-2eb8-43af-9059-83ee27755a7d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:17:29 crc kubenswrapper[5099]: I0122 14:17:29.873285 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79c94e89-2eb8-43af-9059-83ee27755a7d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.157812 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.159944 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.161152 5099 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.161840 5099 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.162225 5099 status_manager.go:895] "Failed to get status for pod" podUID="79c94e89-2eb8-43af-9059-83ee27755a7d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.187478 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.187659 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.187709 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.187772 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.187808 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.187829 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.187893 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.188128 5099 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.188117 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.188153 5099 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.188533 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.191666 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.290234 5099 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.290369 5099 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.290387 5099 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.425526 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.426810 5099 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="6adc2225720223e57146af518d38a71d01a5a7d3c93cb4c4b55e20bf0cebbff0" exitCode=0 Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.426930 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.427124 5099 scope.go:117] "RemoveContainer" containerID="d24c1ca52fc7eb4263fd15ac005d5a88732f554fa7e633ae037611088c865751" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.432735 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"79c94e89-2eb8-43af-9059-83ee27755a7d","Type":"ContainerDied","Data":"69194b7dbb27dbdb30a73c5a05bde40d764c78ed1bc1bba978b8371a20393661"} Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.432793 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69194b7dbb27dbdb30a73c5a05bde40d764c78ed1bc1bba978b8371a20393661" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.432978 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.450382 5099 scope.go:117] "RemoveContainer" containerID="d9b88a4b7534e1edf736c237a20e009b50a64a78928e2fd583b9ba86400b6452" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.452054 5099 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.452690 5099 status_manager.go:895] "Failed to get status for pod" podUID="79c94e89-2eb8-43af-9059-83ee27755a7d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.453474 5099 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.461283 5099 status_manager.go:895] "Failed to get status for pod" podUID="79c94e89-2eb8-43af-9059-83ee27755a7d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.462418 5099 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.462863 5099 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:30 
crc kubenswrapper[5099]: I0122 14:17:30.470934 5099 scope.go:117] "RemoveContainer" containerID="b7376664c839f44962e7ba0e53f1895d1e99866d768aa568d31816e23f3959b1" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.488724 5099 scope.go:117] "RemoveContainer" containerID="57d66dcb61882a772119bb25e7d1b4b49ea5f91bdbc8a2daf7f3301b0eeb4d3e" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.510593 5099 scope.go:117] "RemoveContainer" containerID="6adc2225720223e57146af518d38a71d01a5a7d3c93cb4c4b55e20bf0cebbff0" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.532714 5099 scope.go:117] "RemoveContainer" containerID="516978eba8fe79e87c1d0bcd72dca39f8671f5f446bc1ef52787527b28e698dd" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.578080 5099 scope.go:117] "RemoveContainer" containerID="d24c1ca52fc7eb4263fd15ac005d5a88732f554fa7e633ae037611088c865751" Jan 22 14:17:30 crc kubenswrapper[5099]: E0122 14:17:30.578589 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d24c1ca52fc7eb4263fd15ac005d5a88732f554fa7e633ae037611088c865751\": container with ID starting with d24c1ca52fc7eb4263fd15ac005d5a88732f554fa7e633ae037611088c865751 not found: ID does not exist" containerID="d24c1ca52fc7eb4263fd15ac005d5a88732f554fa7e633ae037611088c865751" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.578659 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d24c1ca52fc7eb4263fd15ac005d5a88732f554fa7e633ae037611088c865751"} err="failed to get container status \"d24c1ca52fc7eb4263fd15ac005d5a88732f554fa7e633ae037611088c865751\": rpc error: code = NotFound desc = could not find container \"d24c1ca52fc7eb4263fd15ac005d5a88732f554fa7e633ae037611088c865751\": container with ID starting with d24c1ca52fc7eb4263fd15ac005d5a88732f554fa7e633ae037611088c865751 not found: ID does not exist" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.578702 5099 scope.go:117] "RemoveContainer" containerID="d9b88a4b7534e1edf736c237a20e009b50a64a78928e2fd583b9ba86400b6452" Jan 22 14:17:30 crc kubenswrapper[5099]: E0122 14:17:30.579507 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9b88a4b7534e1edf736c237a20e009b50a64a78928e2fd583b9ba86400b6452\": container with ID starting with d9b88a4b7534e1edf736c237a20e009b50a64a78928e2fd583b9ba86400b6452 not found: ID does not exist" containerID="d9b88a4b7534e1edf736c237a20e009b50a64a78928e2fd583b9ba86400b6452" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.579549 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b88a4b7534e1edf736c237a20e009b50a64a78928e2fd583b9ba86400b6452"} err="failed to get container status \"d9b88a4b7534e1edf736c237a20e009b50a64a78928e2fd583b9ba86400b6452\": rpc error: code = NotFound desc = could not find container \"d9b88a4b7534e1edf736c237a20e009b50a64a78928e2fd583b9ba86400b6452\": container with ID starting with d9b88a4b7534e1edf736c237a20e009b50a64a78928e2fd583b9ba86400b6452 not found: ID does not exist" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.579583 5099 scope.go:117] "RemoveContainer" containerID="b7376664c839f44962e7ba0e53f1895d1e99866d768aa568d31816e23f3959b1" Jan 22 14:17:30 crc kubenswrapper[5099]: E0122 14:17:30.580067 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b7376664c839f44962e7ba0e53f1895d1e99866d768aa568d31816e23f3959b1\": container with ID starting with b7376664c839f44962e7ba0e53f1895d1e99866d768aa568d31816e23f3959b1 not found: ID does not exist" containerID="b7376664c839f44962e7ba0e53f1895d1e99866d768aa568d31816e23f3959b1" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.580203 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7376664c839f44962e7ba0e53f1895d1e99866d768aa568d31816e23f3959b1"} err="failed to get container status \"b7376664c839f44962e7ba0e53f1895d1e99866d768aa568d31816e23f3959b1\": rpc error: code = NotFound desc = could not find container \"b7376664c839f44962e7ba0e53f1895d1e99866d768aa568d31816e23f3959b1\": container with ID starting with b7376664c839f44962e7ba0e53f1895d1e99866d768aa568d31816e23f3959b1 not found: ID does not exist" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.580266 5099 scope.go:117] "RemoveContainer" containerID="57d66dcb61882a772119bb25e7d1b4b49ea5f91bdbc8a2daf7f3301b0eeb4d3e" Jan 22 14:17:30 crc kubenswrapper[5099]: E0122 14:17:30.580762 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57d66dcb61882a772119bb25e7d1b4b49ea5f91bdbc8a2daf7f3301b0eeb4d3e\": container with ID starting with 57d66dcb61882a772119bb25e7d1b4b49ea5f91bdbc8a2daf7f3301b0eeb4d3e not found: ID does not exist" containerID="57d66dcb61882a772119bb25e7d1b4b49ea5f91bdbc8a2daf7f3301b0eeb4d3e" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.580795 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57d66dcb61882a772119bb25e7d1b4b49ea5f91bdbc8a2daf7f3301b0eeb4d3e"} err="failed to get container status \"57d66dcb61882a772119bb25e7d1b4b49ea5f91bdbc8a2daf7f3301b0eeb4d3e\": rpc error: code = NotFound desc = could not find container \"57d66dcb61882a772119bb25e7d1b4b49ea5f91bdbc8a2daf7f3301b0eeb4d3e\": container with ID starting with 57d66dcb61882a772119bb25e7d1b4b49ea5f91bdbc8a2daf7f3301b0eeb4d3e not found: ID does not exist" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.580835 5099 scope.go:117] "RemoveContainer" containerID="6adc2225720223e57146af518d38a71d01a5a7d3c93cb4c4b55e20bf0cebbff0" Jan 22 14:17:30 crc kubenswrapper[5099]: E0122 14:17:30.581319 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6adc2225720223e57146af518d38a71d01a5a7d3c93cb4c4b55e20bf0cebbff0\": container with ID starting with 6adc2225720223e57146af518d38a71d01a5a7d3c93cb4c4b55e20bf0cebbff0 not found: ID does not exist" containerID="6adc2225720223e57146af518d38a71d01a5a7d3c93cb4c4b55e20bf0cebbff0" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.581370 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6adc2225720223e57146af518d38a71d01a5a7d3c93cb4c4b55e20bf0cebbff0"} err="failed to get container status \"6adc2225720223e57146af518d38a71d01a5a7d3c93cb4c4b55e20bf0cebbff0\": rpc error: code = NotFound desc = could not find container \"6adc2225720223e57146af518d38a71d01a5a7d3c93cb4c4b55e20bf0cebbff0\": container with ID starting with 6adc2225720223e57146af518d38a71d01a5a7d3c93cb4c4b55e20bf0cebbff0 not found: ID does not exist" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.581386 5099 scope.go:117] "RemoveContainer" containerID="516978eba8fe79e87c1d0bcd72dca39f8671f5f446bc1ef52787527b28e698dd" Jan 22 14:17:30 crc 
kubenswrapper[5099]: E0122 14:17:30.581799 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"516978eba8fe79e87c1d0bcd72dca39f8671f5f446bc1ef52787527b28e698dd\": container with ID starting with 516978eba8fe79e87c1d0bcd72dca39f8671f5f446bc1ef52787527b28e698dd not found: ID does not exist" containerID="516978eba8fe79e87c1d0bcd72dca39f8671f5f446bc1ef52787527b28e698dd" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.581827 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"516978eba8fe79e87c1d0bcd72dca39f8671f5f446bc1ef52787527b28e698dd"} err="failed to get container status \"516978eba8fe79e87c1d0bcd72dca39f8671f5f446bc1ef52787527b28e698dd\": rpc error: code = NotFound desc = could not find container \"516978eba8fe79e87c1d0bcd72dca39f8671f5f446bc1ef52787527b28e698dd\": container with ID starting with 516978eba8fe79e87c1d0bcd72dca39f8671f5f446bc1ef52787527b28e698dd not found: ID does not exist" Jan 22 14:17:30 crc kubenswrapper[5099]: I0122 14:17:30.772566 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 22 14:17:31 crc kubenswrapper[5099]: I0122 14:17:31.958122 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" podUID="7894c17b-6de7-426e-b27a-4834b7186e8f" containerName="oauth-openshift" containerID="cri-o://22cc0ab38c818316cb48bae6f0f671fff417bf465bf9a69bf87be79b48cebfab" gracePeriod=15 Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.408397 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.409118 5099 status_manager.go:895] "Failed to get status for pod" podUID="7894c17b-6de7-426e-b27a-4834b7186e8f" pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-pfh7d\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.409579 5099 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.409896 5099 status_manager.go:895] "Failed to get status for pod" podUID="79c94e89-2eb8-43af-9059-83ee27755a7d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.446398 5099 generic.go:358] "Generic (PLEG): container finished" podID="7894c17b-6de7-426e-b27a-4834b7186e8f" containerID="22cc0ab38c818316cb48bae6f0f671fff417bf465bf9a69bf87be79b48cebfab" exitCode=0 Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.446507 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.446584 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" event={"ID":"7894c17b-6de7-426e-b27a-4834b7186e8f","Type":"ContainerDied","Data":"22cc0ab38c818316cb48bae6f0f671fff417bf465bf9a69bf87be79b48cebfab"} Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.446693 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" event={"ID":"7894c17b-6de7-426e-b27a-4834b7186e8f","Type":"ContainerDied","Data":"98c20b1c3525153709b26db562ee5682f654df4b1d0cab9fbf061f3351d948c4"} Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.446721 5099 scope.go:117] "RemoveContainer" containerID="22cc0ab38c818316cb48bae6f0f671fff417bf465bf9a69bf87be79b48cebfab" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.447339 5099 status_manager.go:895] "Failed to get status for pod" podUID="7894c17b-6de7-426e-b27a-4834b7186e8f" pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-pfh7d\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.447831 5099 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.448150 5099 status_manager.go:895] "Failed to get status for pod" podUID="79c94e89-2eb8-43af-9059-83ee27755a7d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.466243 5099 scope.go:117] "RemoveContainer" containerID="22cc0ab38c818316cb48bae6f0f671fff417bf465bf9a69bf87be79b48cebfab" Jan 22 14:17:32 crc kubenswrapper[5099]: E0122 14:17:32.466705 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22cc0ab38c818316cb48bae6f0f671fff417bf465bf9a69bf87be79b48cebfab\": container with ID starting with 22cc0ab38c818316cb48bae6f0f671fff417bf465bf9a69bf87be79b48cebfab not found: ID does not exist" containerID="22cc0ab38c818316cb48bae6f0f671fff417bf465bf9a69bf87be79b48cebfab" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.466759 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22cc0ab38c818316cb48bae6f0f671fff417bf465bf9a69bf87be79b48cebfab"} err="failed to get container status \"22cc0ab38c818316cb48bae6f0f671fff417bf465bf9a69bf87be79b48cebfab\": rpc error: code = NotFound desc = could not find container \"22cc0ab38c818316cb48bae6f0f671fff417bf465bf9a69bf87be79b48cebfab\": container with ID starting with 22cc0ab38c818316cb48bae6f0f671fff417bf465bf9a69bf87be79b48cebfab not found: ID does not exist" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.524436 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-template-error\") pod \"7894c17b-6de7-426e-b27a-4834b7186e8f\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.524532 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-service-ca\") pod \"7894c17b-6de7-426e-b27a-4834b7186e8f\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.524557 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-cliconfig\") pod \"7894c17b-6de7-426e-b27a-4834b7186e8f\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.524624 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-session\") pod \"7894c17b-6de7-426e-b27a-4834b7186e8f\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.524675 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-trusted-ca-bundle\") pod \"7894c17b-6de7-426e-b27a-4834b7186e8f\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.524703 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7894c17b-6de7-426e-b27a-4834b7186e8f-audit-dir\") pod \"7894c17b-6de7-426e-b27a-4834b7186e8f\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.524738 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-template-provider-selection\") pod \"7894c17b-6de7-426e-b27a-4834b7186e8f\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.524795 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-audit-policies\") pod \"7894c17b-6de7-426e-b27a-4834b7186e8f\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.524821 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-ocp-branding-template\") pod \"7894c17b-6de7-426e-b27a-4834b7186e8f\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.524874 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt77j\" (UniqueName: 
\"kubernetes.io/projected/7894c17b-6de7-426e-b27a-4834b7186e8f-kube-api-access-xt77j\") pod \"7894c17b-6de7-426e-b27a-4834b7186e8f\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.524898 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-router-certs\") pod \"7894c17b-6de7-426e-b27a-4834b7186e8f\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.524946 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-serving-cert\") pod \"7894c17b-6de7-426e-b27a-4834b7186e8f\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.524976 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-template-login\") pod \"7894c17b-6de7-426e-b27a-4834b7186e8f\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.525018 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-idp-0-file-data\") pod \"7894c17b-6de7-426e-b27a-4834b7186e8f\" (UID: \"7894c17b-6de7-426e-b27a-4834b7186e8f\") " Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.525448 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7894c17b-6de7-426e-b27a-4834b7186e8f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7894c17b-6de7-426e-b27a-4834b7186e8f" (UID: "7894c17b-6de7-426e-b27a-4834b7186e8f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.526148 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "7894c17b-6de7-426e-b27a-4834b7186e8f" (UID: "7894c17b-6de7-426e-b27a-4834b7186e8f"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.526936 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "7894c17b-6de7-426e-b27a-4834b7186e8f" (UID: "7894c17b-6de7-426e-b27a-4834b7186e8f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.526967 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "7894c17b-6de7-426e-b27a-4834b7186e8f" (UID: "7894c17b-6de7-426e-b27a-4834b7186e8f"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.527397 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "7894c17b-6de7-426e-b27a-4834b7186e8f" (UID: "7894c17b-6de7-426e-b27a-4834b7186e8f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.531622 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "7894c17b-6de7-426e-b27a-4834b7186e8f" (UID: "7894c17b-6de7-426e-b27a-4834b7186e8f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.531870 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "7894c17b-6de7-426e-b27a-4834b7186e8f" (UID: "7894c17b-6de7-426e-b27a-4834b7186e8f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.532410 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "7894c17b-6de7-426e-b27a-4834b7186e8f" (UID: "7894c17b-6de7-426e-b27a-4834b7186e8f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.533180 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "7894c17b-6de7-426e-b27a-4834b7186e8f" (UID: "7894c17b-6de7-426e-b27a-4834b7186e8f"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.533532 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "7894c17b-6de7-426e-b27a-4834b7186e8f" (UID: "7894c17b-6de7-426e-b27a-4834b7186e8f"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.534851 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "7894c17b-6de7-426e-b27a-4834b7186e8f" (UID: "7894c17b-6de7-426e-b27a-4834b7186e8f"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.535067 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "7894c17b-6de7-426e-b27a-4834b7186e8f" (UID: "7894c17b-6de7-426e-b27a-4834b7186e8f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.541596 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7894c17b-6de7-426e-b27a-4834b7186e8f-kube-api-access-xt77j" (OuterVolumeSpecName: "kube-api-access-xt77j") pod "7894c17b-6de7-426e-b27a-4834b7186e8f" (UID: "7894c17b-6de7-426e-b27a-4834b7186e8f"). InnerVolumeSpecName "kube-api-access-xt77j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.542580 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "7894c17b-6de7-426e-b27a-4834b7186e8f" (UID: "7894c17b-6de7-426e-b27a-4834b7186e8f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.626314 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xt77j\" (UniqueName: \"kubernetes.io/projected/7894c17b-6de7-426e-b27a-4834b7186e8f-kube-api-access-xt77j\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.626350 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.626364 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.626374 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.626384 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.626395 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.626408 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.626421 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.626433 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.626445 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.626456 5099 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7894c17b-6de7-426e-b27a-4834b7186e8f-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.626465 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.626475 5099 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7894c17b-6de7-426e-b27a-4834b7186e8f-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.626485 5099 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7894c17b-6de7-426e-b27a-4834b7186e8f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.769096 5099 status_manager.go:895] "Failed to get status for pod" podUID="79c94e89-2eb8-43af-9059-83ee27755a7d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.769546 5099 status_manager.go:895] "Failed to get status for pod" podUID="7894c17b-6de7-426e-b27a-4834b7186e8f" pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-pfh7d\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.769807 5099 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.781557 5099 status_manager.go:895] "Failed to get status for pod" 
podUID="79c94e89-2eb8-43af-9059-83ee27755a7d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.782025 5099 status_manager.go:895] "Failed to get status for pod" podUID="7894c17b-6de7-426e-b27a-4834b7186e8f" pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-pfh7d\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:32 crc kubenswrapper[5099]: I0122 14:17:32.782468 5099 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:37 crc kubenswrapper[5099]: E0122 14:17:37.523762 5099 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:37 crc kubenswrapper[5099]: E0122 14:17:37.524609 5099 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:37 crc kubenswrapper[5099]: E0122 14:17:37.524933 5099 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:37 crc kubenswrapper[5099]: E0122 14:17:37.525243 5099 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:37 crc kubenswrapper[5099]: E0122 14:17:37.525570 5099 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:37 crc kubenswrapper[5099]: I0122 14:17:37.525596 5099 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 22 14:17:37 crc kubenswrapper[5099]: E0122 14:17:37.525879 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="200ms" Jan 22 14:17:37 crc kubenswrapper[5099]: E0122 14:17:37.727557 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="400ms" Jan 22 14:17:38 crc kubenswrapper[5099]: E0122 14:17:38.129257 5099 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="800ms" Jan 22 14:17:38 crc kubenswrapper[5099]: E0122 14:17:38.924661 5099 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d134c8d5001ce openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:17:28.135745998 +0000 UTC m=+205.843496235,LastTimestamp:2026-01-22 14:17:28.135745998 +0000 UTC m=+205.843496235,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:17:38 crc kubenswrapper[5099]: E0122 14:17:38.929662 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="1.6s" Jan 22 14:17:40 crc kubenswrapper[5099]: I0122 14:17:40.115759 5099 patch_prober.go:28] interesting pod/machine-config-daemon-88wst container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:17:40 crc kubenswrapper[5099]: I0122 14:17:40.115845 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-88wst" podUID="4620190f-fea2-4e88-8a94-8e1bd1e1db12" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:17:40 crc kubenswrapper[5099]: E0122 14:17:40.531083 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="3.2s" Jan 22 14:17:42 crc kubenswrapper[5099]: I0122 14:17:42.500456 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 14:17:42 crc kubenswrapper[5099]: I0122 14:17:42.500705 5099 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="acd69d42784e648b84b99c1cced0501b3fc34e0e8d0fa85436cc27014ab88d80" exitCode=1 Jan 22 14:17:42 crc kubenswrapper[5099]: I0122 14:17:42.500888 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"acd69d42784e648b84b99c1cced0501b3fc34e0e8d0fa85436cc27014ab88d80"} Jan 22 14:17:42 crc kubenswrapper[5099]: I0122 14:17:42.501886 5099 scope.go:117] "RemoveContainer" containerID="acd69d42784e648b84b99c1cced0501b3fc34e0e8d0fa85436cc27014ab88d80" Jan 22 14:17:42 crc kubenswrapper[5099]: I0122 14:17:42.502431 5099 status_manager.go:895] "Failed to get status for pod" podUID="7894c17b-6de7-426e-b27a-4834b7186e8f" pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-pfh7d\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:42 crc kubenswrapper[5099]: I0122 14:17:42.503092 5099 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:42 crc kubenswrapper[5099]: I0122 14:17:42.503595 5099 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:42 crc kubenswrapper[5099]: I0122 14:17:42.503921 5099 status_manager.go:895] "Failed to get status for pod" podUID="79c94e89-2eb8-43af-9059-83ee27755a7d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:42 crc kubenswrapper[5099]: I0122 14:17:42.765104 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:42 crc kubenswrapper[5099]: I0122 14:17:42.765570 5099 status_manager.go:895] "Failed to get status for pod" podUID="79c94e89-2eb8-43af-9059-83ee27755a7d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:42 crc kubenswrapper[5099]: I0122 14:17:42.766298 5099 status_manager.go:895] "Failed to get status for pod" podUID="7894c17b-6de7-426e-b27a-4834b7186e8f" pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-pfh7d\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:42 crc kubenswrapper[5099]: I0122 14:17:42.766870 5099 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:42 crc kubenswrapper[5099]: I0122 14:17:42.767209 5099 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:42 crc kubenswrapper[5099]: I0122 14:17:42.767552 5099 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:42 crc kubenswrapper[5099]: I0122 14:17:42.767795 5099 status_manager.go:895] "Failed to get status for pod" podUID="79c94e89-2eb8-43af-9059-83ee27755a7d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:42 crc kubenswrapper[5099]: I0122 14:17:42.768057 5099 status_manager.go:895] "Failed to get status for pod" podUID="7894c17b-6de7-426e-b27a-4834b7186e8f" pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-pfh7d\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:42 crc kubenswrapper[5099]: I0122 14:17:42.768392 5099 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:42 crc kubenswrapper[5099]: I0122 14:17:42.781479 5099 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="66a6f8fc-92aa-40d8-b0ac-f8b0034c5e36" Jan 22 14:17:42 crc kubenswrapper[5099]: I0122 14:17:42.781510 5099 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="66a6f8fc-92aa-40d8-b0ac-f8b0034c5e36" Jan 22 14:17:42 crc kubenswrapper[5099]: E0122 14:17:42.782054 5099 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:42 crc kubenswrapper[5099]: I0122 14:17:42.782450 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:43 crc kubenswrapper[5099]: I0122 14:17:43.511801 5099 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="bb5e3e70190a9dc57e8eb44a33256a3760bee24612dcda42f7b7e34b7a323d31" exitCode=0 Jan 22 14:17:43 crc kubenswrapper[5099]: I0122 14:17:43.511927 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"bb5e3e70190a9dc57e8eb44a33256a3760bee24612dcda42f7b7e34b7a323d31"} Jan 22 14:17:43 crc kubenswrapper[5099]: I0122 14:17:43.511985 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"ac8e814ee50bf11e145aff08cbc8e48e06827bfdcd6610c0cab7aaeae0fa8286"} Jan 22 14:17:43 crc kubenswrapper[5099]: I0122 14:17:43.512378 5099 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="66a6f8fc-92aa-40d8-b0ac-f8b0034c5e36" Jan 22 14:17:43 crc kubenswrapper[5099]: I0122 14:17:43.512441 5099 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="66a6f8fc-92aa-40d8-b0ac-f8b0034c5e36" Jan 22 14:17:43 crc kubenswrapper[5099]: E0122 14:17:43.513099 5099 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:43 crc kubenswrapper[5099]: I0122 14:17:43.513201 5099 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:43 crc kubenswrapper[5099]: I0122 14:17:43.513438 5099 status_manager.go:895] "Failed to get status for pod" podUID="79c94e89-2eb8-43af-9059-83ee27755a7d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:43 crc kubenswrapper[5099]: I0122 14:17:43.513711 5099 status_manager.go:895] "Failed to get status for pod" podUID="7894c17b-6de7-426e-b27a-4834b7186e8f" pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-pfh7d\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:43 crc kubenswrapper[5099]: I0122 14:17:43.514188 5099 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:43 crc kubenswrapper[5099]: I0122 14:17:43.516510 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 14:17:43 crc kubenswrapper[5099]: I0122 14:17:43.516623 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"e68c881bb6bb51645d563099def68bb185004b4b3768ed25b6e45eb15683d5db"} Jan 22 14:17:43 crc kubenswrapper[5099]: I0122 14:17:43.517442 5099 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:43 crc kubenswrapper[5099]: I0122 14:17:43.517777 5099 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:43 crc kubenswrapper[5099]: I0122 14:17:43.518098 5099 status_manager.go:895] "Failed to get status for pod" podUID="79c94e89-2eb8-43af-9059-83ee27755a7d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:43 crc kubenswrapper[5099]: I0122 14:17:43.518457 5099 status_manager.go:895] "Failed to get status for pod" podUID="7894c17b-6de7-426e-b27a-4834b7186e8f" pod="openshift-authentication/oauth-openshift-66458b6674-pfh7d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-pfh7d\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 22 14:17:43 crc kubenswrapper[5099]: E0122 14:17:43.731825 5099 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="6.4s" Jan 22 14:17:43 crc kubenswrapper[5099]: I0122 14:17:43.873032 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:17:44 crc kubenswrapper[5099]: I0122 14:17:44.525336 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"844d06da5ad68941ae6942cb79ddef02fdb53420239cb8d472d8c01c7b899db2"} Jan 22 14:17:44 crc kubenswrapper[5099]: I0122 14:17:44.525746 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"b27054f2e52e6e70c61de8a89616a13def17f41144436b42548766d0902defc6"} Jan 22 14:17:44 crc kubenswrapper[5099]: I0122 14:17:44.525757 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"8c6c62c96330247108b75684c9ccfc08dd67f28594ee9d0d8893b74ce63d7beb"} Jan 22 14:17:45 crc kubenswrapper[5099]: I0122 14:17:45.533094 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"3841cede2f68d1e35b0d6ba38b7d05a0cb6b512290de9d598c74784a71d965cc"} Jan 22 14:17:45 crc kubenswrapper[5099]: I0122 14:17:45.533170 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"5d746fd63bbfbbed3a5a43accab8db22ae6e22713e08128373ba1e9739b8077b"} Jan 22 14:17:45 crc kubenswrapper[5099]: I0122 14:17:45.533228 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:45 crc kubenswrapper[5099]: I0122 14:17:45.533388 5099 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="66a6f8fc-92aa-40d8-b0ac-f8b0034c5e36" Jan 22 14:17:45 crc kubenswrapper[5099]: I0122 14:17:45.533413 5099 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="66a6f8fc-92aa-40d8-b0ac-f8b0034c5e36" Jan 22 14:17:47 crc kubenswrapper[5099]: I0122 14:17:47.783427 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:47 crc kubenswrapper[5099]: I0122 14:17:47.783507 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:47 crc kubenswrapper[5099]: I0122 14:17:47.790212 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:50 crc kubenswrapper[5099]: I0122 14:17:50.545243 5099 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:50 crc kubenswrapper[5099]: I0122 14:17:50.545758 5099 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:51 crc kubenswrapper[5099]: I0122 14:17:51.568787 5099 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="66a6f8fc-92aa-40d8-b0ac-f8b0034c5e36" Jan 22 14:17:51 crc kubenswrapper[5099]: I0122 14:17:51.568825 5099 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="66a6f8fc-92aa-40d8-b0ac-f8b0034c5e36" Jan 22 14:17:51 crc kubenswrapper[5099]: I0122 14:17:51.577569 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 
22 14:17:51 crc kubenswrapper[5099]: I0122 14:17:51.754713 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:17:51 crc kubenswrapper[5099]: I0122 14:17:51.759784 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:17:52 crc kubenswrapper[5099]: I0122 14:17:52.574625 5099 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="66a6f8fc-92aa-40d8-b0ac-f8b0034c5e36" Jan 22 14:17:52 crc kubenswrapper[5099]: I0122 14:17:52.574690 5099 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="66a6f8fc-92aa-40d8-b0ac-f8b0034c5e36" Jan 22 14:17:52 crc kubenswrapper[5099]: I0122 14:17:52.584318 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:17:52 crc kubenswrapper[5099]: I0122 14:17:52.791294 5099 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="4bdb3a8e-f8ad-43cb-99c3-feef97a45b26" Jan 22 14:18:00 crc kubenswrapper[5099]: I0122 14:18:00.126332 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 22 14:18:00 crc kubenswrapper[5099]: I0122 14:18:00.475965 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 22 14:18:00 crc kubenswrapper[5099]: I0122 14:18:00.540876 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 22 14:18:00 crc kubenswrapper[5099]: I0122 14:18:00.852446 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:18:01 crc kubenswrapper[5099]: I0122 14:18:01.066740 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 22 14:18:01 crc kubenswrapper[5099]: I0122 14:18:01.628687 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 22 14:18:01 crc kubenswrapper[5099]: I0122 14:18:01.784775 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 22 14:18:01 crc kubenswrapper[5099]: I0122 14:18:01.870609 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 22 14:18:02 crc kubenswrapper[5099]: I0122 14:18:02.034443 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 22 14:18:02 crc kubenswrapper[5099]: I0122 14:18:02.139427 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:18:02 crc kubenswrapper[5099]: I0122 14:18:02.179521 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 22 14:18:02 crc 
kubenswrapper[5099]: I0122 14:18:02.232799 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:18:02 crc kubenswrapper[5099]: I0122 14:18:02.337778 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 22 14:18:02 crc kubenswrapper[5099]: I0122 14:18:02.367683 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 22 14:18:02 crc kubenswrapper[5099]: I0122 14:18:02.406927 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 22 14:18:02 crc kubenswrapper[5099]: I0122 14:18:02.409757 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 22 14:18:02 crc kubenswrapper[5099]: I0122 14:18:02.657545 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 22 14:18:02 crc kubenswrapper[5099]: I0122 14:18:02.670059 5099 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 14:18:02 crc kubenswrapper[5099]: I0122 14:18:02.905313 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 22 14:18:02 crc kubenswrapper[5099]: I0122 14:18:02.916317 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 22 14:18:02 crc kubenswrapper[5099]: I0122 14:18:02.977569 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 22 14:18:02 crc kubenswrapper[5099]: I0122 14:18:02.981526 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 22 14:18:03 crc kubenswrapper[5099]: I0122 14:18:03.052666 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:18:03 crc kubenswrapper[5099]: I0122 14:18:03.116882 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 22 14:18:03 crc kubenswrapper[5099]: I0122 14:18:03.274931 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 22 14:18:03 crc kubenswrapper[5099]: I0122 14:18:03.325675 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 22 14:18:03 crc kubenswrapper[5099]: I0122 14:18:03.350409 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 22 14:18:03 crc kubenswrapper[5099]: I0122 14:18:03.425045 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 22 14:18:03 crc kubenswrapper[5099]: I0122 14:18:03.437430 5099 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 22 14:18:03 crc kubenswrapper[5099]: I0122 14:18:03.442939 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 22 14:18:03 crc kubenswrapper[5099]: I0122 14:18:03.505685 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 22 14:18:03 crc kubenswrapper[5099]: I0122 14:18:03.577481 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 22 14:18:03 crc kubenswrapper[5099]: I0122 14:18:03.637631 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 22 14:18:04 crc kubenswrapper[5099]: I0122 14:18:04.037506 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 22 14:18:04 crc kubenswrapper[5099]: I0122 14:18:04.083186 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 22 14:18:04 crc kubenswrapper[5099]: I0122 14:18:04.085732 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 22 14:18:04 crc kubenswrapper[5099]: I0122 14:18:04.222353 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 22 14:18:04 crc kubenswrapper[5099]: I0122 14:18:04.246006 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 22 14:18:04 crc kubenswrapper[5099]: I0122 14:18:04.274087 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 22 14:18:04 crc kubenswrapper[5099]: I0122 14:18:04.338122 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 22 14:18:04 crc kubenswrapper[5099]: I0122 14:18:04.373207 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 22 14:18:04 crc kubenswrapper[5099]: I0122 14:18:04.390001 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 22 14:18:04 crc kubenswrapper[5099]: I0122 14:18:04.405161 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 22 14:18:04 crc kubenswrapper[5099]: I0122 14:18:04.427700 5099 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 14:18:04 crc kubenswrapper[5099]: I0122 14:18:04.564071 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 22 14:18:04 crc kubenswrapper[5099]: I0122 14:18:04.629850 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 
22 14:18:04 crc kubenswrapper[5099]: I0122 14:18:04.676479 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 22 14:18:04 crc kubenswrapper[5099]: I0122 14:18:04.703414 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 22 14:18:04 crc kubenswrapper[5099]: I0122 14:18:04.775726 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 22 14:18:04 crc kubenswrapper[5099]: I0122 14:18:04.959475 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 22 14:18:05 crc kubenswrapper[5099]: I0122 14:18:05.001070 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 22 14:18:05 crc kubenswrapper[5099]: I0122 14:18:05.159265 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 22 14:18:05 crc kubenswrapper[5099]: I0122 14:18:05.161900 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 22 14:18:05 crc kubenswrapper[5099]: I0122 14:18:05.257434 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 22 14:18:05 crc kubenswrapper[5099]: I0122 14:18:05.446292 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 22 14:18:05 crc kubenswrapper[5099]: I0122 14:18:05.495648 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 22 14:18:05 crc kubenswrapper[5099]: I0122 14:18:05.520048 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 22 14:18:05 crc kubenswrapper[5099]: I0122 14:18:05.601670 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 22 14:18:05 crc kubenswrapper[5099]: I0122 14:18:05.631587 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 22 14:18:05 crc kubenswrapper[5099]: I0122 14:18:05.666354 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 22 14:18:05 crc kubenswrapper[5099]: I0122 14:18:05.780054 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 22 14:18:05 crc kubenswrapper[5099]: I0122 14:18:05.816619 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 22 14:18:05 crc kubenswrapper[5099]: I0122 14:18:05.848685 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 22 14:18:05 crc kubenswrapper[5099]: I0122 
14:18:05.870704 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 22 14:18:05 crc kubenswrapper[5099]: I0122 14:18:05.873610 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:18:05 crc kubenswrapper[5099]: I0122 14:18:05.963403 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 22 14:18:06 crc kubenswrapper[5099]: I0122 14:18:06.069294 5099 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 14:18:06 crc kubenswrapper[5099]: I0122 14:18:06.105286 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 22 14:18:06 crc kubenswrapper[5099]: I0122 14:18:06.112908 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 22 14:18:06 crc kubenswrapper[5099]: I0122 14:18:06.169285 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 22 14:18:06 crc kubenswrapper[5099]: I0122 14:18:06.198760 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:18:06 crc kubenswrapper[5099]: I0122 14:18:06.205582 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 22 14:18:06 crc kubenswrapper[5099]: I0122 14:18:06.230854 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 22 14:18:06 crc kubenswrapper[5099]: I0122 14:18:06.261216 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 22 14:18:06 crc kubenswrapper[5099]: I0122 14:18:06.282532 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 22 14:18:06 crc kubenswrapper[5099]: I0122 14:18:06.283169 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 22 14:18:06 crc kubenswrapper[5099]: I0122 14:18:06.295196 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 22 14:18:06 crc kubenswrapper[5099]: I0122 14:18:06.331567 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 22 14:18:06 crc kubenswrapper[5099]: I0122 14:18:06.333504 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 22 14:18:06 crc kubenswrapper[5099]: I0122 14:18:06.431384 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 22 14:18:06 crc kubenswrapper[5099]: I0122 14:18:06.524620 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 22 14:18:06 crc 
kubenswrapper[5099]: I0122 14:18:06.658186 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 22 14:18:06 crc kubenswrapper[5099]: I0122 14:18:06.773123 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 22 14:18:06 crc kubenswrapper[5099]: I0122 14:18:06.796446 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.074137 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.109896 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.115633 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.116661 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.144216 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.233221 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.258378 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.274665 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.305257 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.411769 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.421795 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.472871 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.527000 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.584191 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.625402 5099 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.637951 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.638724 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.693689 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.735266 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.754586 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.781269 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.800093 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 22 14:18:07 crc kubenswrapper[5099]: I0122 14:18:07.843872 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 22 14:18:08 crc kubenswrapper[5099]: I0122 14:18:08.060777 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 22 14:18:08 crc kubenswrapper[5099]: I0122 14:18:08.066875 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:18:08 crc kubenswrapper[5099]: I0122 14:18:08.207004 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 22 14:18:08 crc kubenswrapper[5099]: I0122 14:18:08.229339 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 22 14:18:08 crc kubenswrapper[5099]: I0122 14:18:08.409684 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 22 14:18:08 crc kubenswrapper[5099]: I0122 14:18:08.461922 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 22 14:18:08 crc kubenswrapper[5099]: I0122 14:18:08.465379 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 22 14:18:08 crc kubenswrapper[5099]: I0122 14:18:08.475485 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 22 14:18:08 crc kubenswrapper[5099]: I0122 14:18:08.545634 5099 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 22 14:18:08 crc kubenswrapper[5099]: I0122 14:18:08.578507 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 22 14:18:08 crc kubenswrapper[5099]: I0122 14:18:08.580763 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 22 14:18:08 crc kubenswrapper[5099]: I0122 14:18:08.658336 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 22 14:18:08 crc kubenswrapper[5099]: I0122 14:18:08.674902 5099 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 14:18:08 crc kubenswrapper[5099]: I0122 14:18:08.691198 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 22 14:18:08 crc kubenswrapper[5099]: I0122 14:18:08.973646 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 22 14:18:08 crc kubenswrapper[5099]: I0122 14:18:08.979461 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.066543 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.072229 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.112482 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.154271 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.283535 5099 ???:1] "http: TLS handshake error from 192.168.126.11:57662: no serving certificate available for the kubelet" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.368219 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.387100 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.406463 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.466068 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.478405 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.517666 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.539897 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.667541 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.691465 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.702666 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.717523 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.845484 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.847375 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.850840 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.864223 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.884002 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.895334 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.897725 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.948350 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 22 14:18:09 crc kubenswrapper[5099]: I0122 14:18:09.993340 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 22 14:18:10 crc kubenswrapper[5099]: I0122 14:18:10.026276 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 22 14:18:10 crc kubenswrapper[5099]: I0122 14:18:10.115751 5099 patch_prober.go:28] interesting pod/machine-config-daemon-88wst container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:18:10 crc kubenswrapper[5099]: I0122 14:18:10.115874 5099 prober.go:120] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-88wst" podUID="4620190f-fea2-4e88-8a94-8e1bd1e1db12" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:18:10 crc kubenswrapper[5099]: I0122 14:18:10.119195 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 22 14:18:10 crc kubenswrapper[5099]: I0122 14:18:10.301593 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 22 14:18:10 crc kubenswrapper[5099]: I0122 14:18:10.313196 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 22 14:18:10 crc kubenswrapper[5099]: I0122 14:18:10.334414 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 22 14:18:10 crc kubenswrapper[5099]: I0122 14:18:10.394956 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 22 14:18:10 crc kubenswrapper[5099]: I0122 14:18:10.443293 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 22 14:18:10 crc kubenswrapper[5099]: I0122 14:18:10.468425 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 22 14:18:10 crc kubenswrapper[5099]: I0122 14:18:10.620661 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 22 14:18:10 crc kubenswrapper[5099]: I0122 14:18:10.639628 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 22 14:18:10 crc kubenswrapper[5099]: I0122 14:18:10.679434 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 22 14:18:10 crc kubenswrapper[5099]: I0122 14:18:10.704836 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 22 14:18:10 crc kubenswrapper[5099]: I0122 14:18:10.786207 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 22 14:18:10 crc kubenswrapper[5099]: I0122 14:18:10.877053 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 22 14:18:10 crc kubenswrapper[5099]: I0122 14:18:10.945821 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 22 14:18:10 crc kubenswrapper[5099]: I0122 14:18:10.969246 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.036887 5099 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.051122 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.082867 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.089249 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.091160 5099 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.094910 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=44.094897752 podStartE2EDuration="44.094897752s" podCreationTimestamp="2026-01-22 14:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:50.556376354 +0000 UTC m=+228.264126591" watchObservedRunningTime="2026-01-22 14:18:11.094897752 +0000 UTC m=+248.802647989" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.095320 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-66458b6674-pfh7d"] Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.095369 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6d45c87498-g9sj4","openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.095766 5099 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="66a6f8fc-92aa-40d8-b0ac-f8b0034c5e36" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.095788 5099 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="66a6f8fc-92aa-40d8-b0ac-f8b0034c5e36" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.095862 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7894c17b-6de7-426e-b27a-4834b7186e8f" containerName="oauth-openshift" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.095875 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="7894c17b-6de7-426e-b27a-4834b7186e8f" containerName="oauth-openshift" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.095887 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="79c94e89-2eb8-43af-9059-83ee27755a7d" containerName="installer" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.095893 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="79c94e89-2eb8-43af-9059-83ee27755a7d" containerName="installer" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.095989 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="79c94e89-2eb8-43af-9059-83ee27755a7d" containerName="installer" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.096001 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="7894c17b-6de7-426e-b27a-4834b7186e8f" 
containerName="oauth-openshift" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.127830 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.141566 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.141591 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.144573 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.144887 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.145764 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.145882 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.146018 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.146164 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.146826 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.146979 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.147331 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.147386 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.147428 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.147342 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.150678 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.152665 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 22 14:18:11 crc 
kubenswrapper[5099]: I0122 14:18:11.156574 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.183461 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.183439187 podStartE2EDuration="21.183439187s" podCreationTimestamp="2026-01-22 14:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:18:11.179335448 +0000 UTC m=+248.887085695" watchObservedRunningTime="2026-01-22 14:18:11.183439187 +0000 UTC m=+248.891189424" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.242798 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.256312 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-system-session\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.256363 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.256389 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/83c39bd3-7f92-475b-9903-5da58b57c68a-audit-policies\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.256420 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-user-template-login\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.256443 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.256535 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.256582 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.256657 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.256691 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-user-template-error\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.256715 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/83c39bd3-7f92-475b-9903-5da58b57c68a-audit-dir\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.256732 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.256760 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.256791 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fksz7\" (UniqueName: \"kubernetes.io/projected/83c39bd3-7f92-475b-9903-5da58b57c68a-kube-api-access-fksz7\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.256820 5099 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.358024 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.358079 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fksz7\" (UniqueName: \"kubernetes.io/projected/83c39bd3-7f92-475b-9903-5da58b57c68a-kube-api-access-fksz7\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.358100 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.358125 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-system-session\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.358585 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.358793 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/83c39bd3-7f92-475b-9903-5da58b57c68a-audit-policies\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.358988 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-user-template-login\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.359195 5099 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.359385 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.359535 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.359766 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.359937 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-user-template-error\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.360090 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/83c39bd3-7f92-475b-9903-5da58b57c68a-audit-dir\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.360230 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.360347 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.359638 5099 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/83c39bd3-7f92-475b-9903-5da58b57c68a-audit-policies\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.360891 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/83c39bd3-7f92-475b-9903-5da58b57c68a-audit-dir\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.361108 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.361708 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.364435 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.365372 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.366676 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.366689 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-user-template-login\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.366709 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.366801 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-system-session\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.367760 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-user-template-error\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.373215 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/83c39bd3-7f92-475b-9903-5da58b57c68a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.375633 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fksz7\" (UniqueName: \"kubernetes.io/projected/83c39bd3-7f92-475b-9903-5da58b57c68a-kube-api-access-fksz7\") pod \"oauth-openshift-6d45c87498-g9sj4\" (UID: \"83c39bd3-7f92-475b-9903-5da58b57c68a\") " pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.433387 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.443881 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.462125 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.500547 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.542461 5099 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.635736 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6d45c87498-g9sj4"] Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.643825 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.655376 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.671229 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.687463 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.701553 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" event={"ID":"83c39bd3-7f92-475b-9903-5da58b57c68a","Type":"ContainerStarted","Data":"44e9564e1e86eae6dccaaaf6e17fcd2ab6bee24a82813886caab2515bf891c88"} Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.782449 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.795680 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.800045 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.825264 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.927763 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.965414 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 22 14:18:11 crc kubenswrapper[5099]: I0122 14:18:11.971785 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 22 14:18:12 crc kubenswrapper[5099]: I0122 14:18:12.007754 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 22 14:18:12 crc kubenswrapper[5099]: I0122 14:18:12.017190 5099 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 22 14:18:12 crc kubenswrapper[5099]: I0122 14:18:12.150358 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 22 14:18:12 crc kubenswrapper[5099]: I0122 14:18:12.170335 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:18:12 crc kubenswrapper[5099]: I0122 14:18:12.466532 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:18:12 crc kubenswrapper[5099]: I0122 14:18:12.538959 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 22 14:18:12 crc kubenswrapper[5099]: I0122 14:18:12.550131 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 22 14:18:12 crc kubenswrapper[5099]: I0122 14:18:12.557417 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 22 14:18:12 crc kubenswrapper[5099]: I0122 14:18:12.672109 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:18:12 crc kubenswrapper[5099]: I0122 14:18:12.708885 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" event={"ID":"83c39bd3-7f92-475b-9903-5da58b57c68a","Type":"ContainerStarted","Data":"ba374a63c9361b3c36fc86f4d7f863addc7831c27dbe300d5bab6f4e246bf00d"} Jan 22 14:18:12 crc kubenswrapper[5099]: I0122 14:18:12.727599 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" podStartSLOduration=66.727584963 podStartE2EDuration="1m6.727584963s" podCreationTimestamp="2026-01-22 14:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:18:12.725912039 +0000 UTC m=+250.433662276" watchObservedRunningTime="2026-01-22 14:18:12.727584963 +0000 UTC m=+250.435335190" Jan 22 14:18:12 crc kubenswrapper[5099]: I0122 14:18:12.738525 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 22 14:18:12 crc kubenswrapper[5099]: I0122 14:18:12.767257 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7894c17b-6de7-426e-b27a-4834b7186e8f" path="/var/lib/kubelet/pods/7894c17b-6de7-426e-b27a-4834b7186e8f/volumes" Jan 22 14:18:12 crc kubenswrapper[5099]: I0122 14:18:12.813808 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 22 14:18:12 crc kubenswrapper[5099]: I0122 14:18:12.844542 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 22 14:18:12 crc kubenswrapper[5099]: I0122 14:18:12.865291 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 22 14:18:12 crc kubenswrapper[5099]: I0122 14:18:12.894298 5099 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 22 14:18:12 crc kubenswrapper[5099]: I0122 14:18:12.904857 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 22 14:18:12 crc kubenswrapper[5099]: I0122 14:18:12.945877 5099 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 14:18:12 crc kubenswrapper[5099]: I0122 14:18:12.946141 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://fd120c0321837e458748c6547987a0c204c32cc64aaebde9715ac99a2eda1ebd" gracePeriod=5 Jan 22 14:18:12 crc kubenswrapper[5099]: I0122 14:18:12.983693 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 22 14:18:13 crc kubenswrapper[5099]: I0122 14:18:13.004953 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 22 14:18:13 crc kubenswrapper[5099]: I0122 14:18:13.028642 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 22 14:18:13 crc kubenswrapper[5099]: I0122 14:18:13.274919 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 22 14:18:13 crc kubenswrapper[5099]: I0122 14:18:13.512720 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:18:13 crc kubenswrapper[5099]: I0122 14:18:13.547949 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 22 14:18:13 crc kubenswrapper[5099]: I0122 14:18:13.636217 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 22 14:18:13 crc kubenswrapper[5099]: I0122 14:18:13.713067 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:13 crc kubenswrapper[5099]: I0122 14:18:13.719722 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6d45c87498-g9sj4" Jan 22 14:18:13 crc kubenswrapper[5099]: I0122 14:18:13.812234 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 22 14:18:13 crc kubenswrapper[5099]: I0122 14:18:13.826914 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 22 14:18:13 crc kubenswrapper[5099]: I0122 14:18:13.868509 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 22 14:18:13 crc kubenswrapper[5099]: I0122 14:18:13.946688 5099 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 22 14:18:14 crc kubenswrapper[5099]: I0122 14:18:14.093428 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 22 14:18:14 crc kubenswrapper[5099]: I0122 14:18:14.136321 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 22 14:18:14 crc kubenswrapper[5099]: I0122 14:18:14.311217 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 22 14:18:14 crc kubenswrapper[5099]: I0122 14:18:14.584681 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 22 14:18:14 crc kubenswrapper[5099]: I0122 14:18:14.654414 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 22 14:18:14 crc kubenswrapper[5099]: I0122 14:18:14.761567 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 22 14:18:14 crc kubenswrapper[5099]: I0122 14:18:14.828816 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 22 14:18:14 crc kubenswrapper[5099]: I0122 14:18:14.847601 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 22 14:18:14 crc kubenswrapper[5099]: I0122 14:18:14.993078 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 22 14:18:14 crc kubenswrapper[5099]: I0122 14:18:14.999472 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 22 14:18:15 crc kubenswrapper[5099]: I0122 14:18:15.180330 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:18:15 crc kubenswrapper[5099]: I0122 14:18:15.449015 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 22 14:18:15 crc kubenswrapper[5099]: I0122 14:18:15.568669 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 22 14:18:15 crc kubenswrapper[5099]: I0122 14:18:15.586605 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 22 14:18:15 crc kubenswrapper[5099]: I0122 14:18:15.656329 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 22 14:18:15 crc kubenswrapper[5099]: I0122 14:18:15.825211 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 22 14:18:15 crc kubenswrapper[5099]: I0122 14:18:15.829077 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" 
Jan 22 14:18:15 crc kubenswrapper[5099]: I0122 14:18:15.906044 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 22 14:18:16 crc kubenswrapper[5099]: I0122 14:18:16.054463 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 22 14:18:16 crc kubenswrapper[5099]: I0122 14:18:16.237266 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.539799 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.540152 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.653498 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.653586 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.653628 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.653667 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.653716 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.653777 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.653849 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). 
InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.654035 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.654186 5099 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.654324 5099 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.654336 5099 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.654347 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.667552 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.742990 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.743090 5099 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="fd120c0321837e458748c6547987a0c204c32cc64aaebde9715ac99a2eda1ebd" exitCode=137 Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.743148 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.743175 5099 scope.go:117] "RemoveContainer" containerID="fd120c0321837e458748c6547987a0c204c32cc64aaebde9715ac99a2eda1ebd" Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.756026 5099 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.756053 5099 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.765516 5099 scope.go:117] "RemoveContainer" containerID="fd120c0321837e458748c6547987a0c204c32cc64aaebde9715ac99a2eda1ebd" Jan 22 14:18:18 crc kubenswrapper[5099]: E0122 14:18:18.766553 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd120c0321837e458748c6547987a0c204c32cc64aaebde9715ac99a2eda1ebd\": container with ID starting with fd120c0321837e458748c6547987a0c204c32cc64aaebde9715ac99a2eda1ebd not found: ID does not exist" containerID="fd120c0321837e458748c6547987a0c204c32cc64aaebde9715ac99a2eda1ebd" Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.766601 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd120c0321837e458748c6547987a0c204c32cc64aaebde9715ac99a2eda1ebd"} err="failed to get container status \"fd120c0321837e458748c6547987a0c204c32cc64aaebde9715ac99a2eda1ebd\": rpc error: code = NotFound desc = could not find container \"fd120c0321837e458748c6547987a0c204c32cc64aaebde9715ac99a2eda1ebd\": container with ID starting with fd120c0321837e458748c6547987a0c204c32cc64aaebde9715ac99a2eda1ebd not found: ID does not exist" Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.768510 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.769674 5099 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.783385 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.783417 5099 kubelet.go:2759] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="8f103b7f-194d-4c66-a67b-b27c4d38bfce" Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.785837 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 14:18:18 crc kubenswrapper[5099]: I0122 14:18:18.785857 5099 kubelet.go:2784] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="8f103b7f-194d-4c66-a67b-b27c4d38bfce" Jan 22 14:18:28 crc kubenswrapper[5099]: I0122 14:18:28.722088 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 22 14:18:32 
crc kubenswrapper[5099]: I0122 14:18:32.344565 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:18:33 crc kubenswrapper[5099]: I0122 14:18:33.899914 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 22 14:18:34 crc kubenswrapper[5099]: I0122 14:18:34.828761 5099 generic.go:358] "Generic (PLEG): container finished" podID="ad87e7e8-19c1-4c92-9400-9873a85e80b4" containerID="e0d647cdd912e67a0310f0285415a7dc9ed9f6f36a2846b251cb651ea8016a0c" exitCode=0 Jan 22 14:18:34 crc kubenswrapper[5099]: I0122 14:18:34.828884 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" event={"ID":"ad87e7e8-19c1-4c92-9400-9873a85e80b4","Type":"ContainerDied","Data":"e0d647cdd912e67a0310f0285415a7dc9ed9f6f36a2846b251cb651ea8016a0c"} Jan 22 14:18:34 crc kubenswrapper[5099]: I0122 14:18:34.829563 5099 scope.go:117] "RemoveContainer" containerID="e0d647cdd912e67a0310f0285415a7dc9ed9f6f36a2846b251cb651ea8016a0c" Jan 22 14:18:35 crc kubenswrapper[5099]: I0122 14:18:35.835814 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" event={"ID":"ad87e7e8-19c1-4c92-9400-9873a85e80b4","Type":"ContainerStarted","Data":"87f317a618386b949ca622d41a46e6b8feabcbcc205992cfe9c9eaf6d7d3ebde"} Jan 22 14:18:35 crc kubenswrapper[5099]: I0122 14:18:35.836634 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" Jan 22 14:18:35 crc kubenswrapper[5099]: I0122 14:18:35.838819 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" Jan 22 14:18:36 crc kubenswrapper[5099]: I0122 14:18:36.335770 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 22 14:18:38 crc kubenswrapper[5099]: I0122 14:18:38.834699 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.156286 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-9xszn"] Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.156986 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" podUID="94313bd3-0b8e-452e-b3b0-c549aabb8426" containerName="controller-manager" containerID="cri-o://34047ea6e030398776cb06e1fb65480408e9951a045599fb9b6019409b68b295" gracePeriod=30 Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.172274 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6"] Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.172603 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" podUID="0bbff495-517c-4f7c-b0e0-797cb63884c9" containerName="route-controller-manager" containerID="cri-o://50dcfe6809b0864ac6f2bc14d1ac88044aca0099b299f9275fc8faabf4b2a43b" gracePeriod=30 Jan 22 14:18:39 crc 
kubenswrapper[5099]: I0122 14:18:39.504080 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.509398 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.535099 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm"] Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.536070 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0bbff495-517c-4f7c-b0e0-797cb63884c9" containerName="route-controller-manager" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.536102 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bbff495-517c-4f7c-b0e0-797cb63884c9" containerName="route-controller-manager" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.536124 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="94313bd3-0b8e-452e-b3b0-c549aabb8426" containerName="controller-manager" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.536136 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="94313bd3-0b8e-452e-b3b0-c549aabb8426" containerName="controller-manager" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.536194 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.536206 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.536362 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.536390 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="0bbff495-517c-4f7c-b0e0-797cb63884c9" containerName="route-controller-manager" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.536410 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="94313bd3-0b8e-452e-b3b0-c549aabb8426" containerName="controller-manager" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.620839 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm6s7\" (UniqueName: \"kubernetes.io/projected/94313bd3-0b8e-452e-b3b0-c549aabb8426-kube-api-access-fm6s7\") pod \"94313bd3-0b8e-452e-b3b0-c549aabb8426\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.620942 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/94313bd3-0b8e-452e-b3b0-c549aabb8426-tmp\") pod \"94313bd3-0b8e-452e-b3b0-c549aabb8426\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.620969 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/94313bd3-0b8e-452e-b3b0-c549aabb8426-proxy-ca-bundles\") pod \"94313bd3-0b8e-452e-b3b0-c549aabb8426\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " Jan 22 14:18:39 crc 
kubenswrapper[5099]: I0122 14:18:39.621005 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94313bd3-0b8e-452e-b3b0-c549aabb8426-serving-cert\") pod \"94313bd3-0b8e-452e-b3b0-c549aabb8426\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.621056 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94313bd3-0b8e-452e-b3b0-c549aabb8426-config\") pod \"94313bd3-0b8e-452e-b3b0-c549aabb8426\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.621127 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq45m\" (UniqueName: \"kubernetes.io/projected/0bbff495-517c-4f7c-b0e0-797cb63884c9-kube-api-access-kq45m\") pod \"0bbff495-517c-4f7c-b0e0-797cb63884c9\" (UID: \"0bbff495-517c-4f7c-b0e0-797cb63884c9\") " Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.621198 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94313bd3-0b8e-452e-b3b0-c549aabb8426-client-ca\") pod \"94313bd3-0b8e-452e-b3b0-c549aabb8426\" (UID: \"94313bd3-0b8e-452e-b3b0-c549aabb8426\") " Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.621231 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0bbff495-517c-4f7c-b0e0-797cb63884c9-tmp\") pod \"0bbff495-517c-4f7c-b0e0-797cb63884c9\" (UID: \"0bbff495-517c-4f7c-b0e0-797cb63884c9\") " Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.621280 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bbff495-517c-4f7c-b0e0-797cb63884c9-client-ca\") pod \"0bbff495-517c-4f7c-b0e0-797cb63884c9\" (UID: \"0bbff495-517c-4f7c-b0e0-797cb63884c9\") " Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.621336 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bbff495-517c-4f7c-b0e0-797cb63884c9-config\") pod \"0bbff495-517c-4f7c-b0e0-797cb63884c9\" (UID: \"0bbff495-517c-4f7c-b0e0-797cb63884c9\") " Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.621365 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bbff495-517c-4f7c-b0e0-797cb63884c9-serving-cert\") pod \"0bbff495-517c-4f7c-b0e0-797cb63884c9\" (UID: \"0bbff495-517c-4f7c-b0e0-797cb63884c9\") " Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.622122 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94313bd3-0b8e-452e-b3b0-c549aabb8426-tmp" (OuterVolumeSpecName: "tmp") pod "94313bd3-0b8e-452e-b3b0-c549aabb8426" (UID: "94313bd3-0b8e-452e-b3b0-c549aabb8426"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.622330 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bbff495-517c-4f7c-b0e0-797cb63884c9-client-ca" (OuterVolumeSpecName: "client-ca") pod "0bbff495-517c-4f7c-b0e0-797cb63884c9" (UID: "0bbff495-517c-4f7c-b0e0-797cb63884c9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.622793 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94313bd3-0b8e-452e-b3b0-c549aabb8426-client-ca" (OuterVolumeSpecName: "client-ca") pod "94313bd3-0b8e-452e-b3b0-c549aabb8426" (UID: "94313bd3-0b8e-452e-b3b0-c549aabb8426"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.623231 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94313bd3-0b8e-452e-b3b0-c549aabb8426-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.623256 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bbff495-517c-4f7c-b0e0-797cb63884c9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.623268 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/94313bd3-0b8e-452e-b3b0-c549aabb8426-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.623457 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bbff495-517c-4f7c-b0e0-797cb63884c9-config" (OuterVolumeSpecName: "config") pod "0bbff495-517c-4f7c-b0e0-797cb63884c9" (UID: "0bbff495-517c-4f7c-b0e0-797cb63884c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.623576 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94313bd3-0b8e-452e-b3b0-c549aabb8426-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "94313bd3-0b8e-452e-b3b0-c549aabb8426" (UID: "94313bd3-0b8e-452e-b3b0-c549aabb8426"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.623716 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bbff495-517c-4f7c-b0e0-797cb63884c9-tmp" (OuterVolumeSpecName: "tmp") pod "0bbff495-517c-4f7c-b0e0-797cb63884c9" (UID: "0bbff495-517c-4f7c-b0e0-797cb63884c9"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.624350 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94313bd3-0b8e-452e-b3b0-c549aabb8426-config" (OuterVolumeSpecName: "config") pod "94313bd3-0b8e-452e-b3b0-c549aabb8426" (UID: "94313bd3-0b8e-452e-b3b0-c549aabb8426"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.628901 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94313bd3-0b8e-452e-b3b0-c549aabb8426-kube-api-access-fm6s7" (OuterVolumeSpecName: "kube-api-access-fm6s7") pod "94313bd3-0b8e-452e-b3b0-c549aabb8426" (UID: "94313bd3-0b8e-452e-b3b0-c549aabb8426"). InnerVolumeSpecName "kube-api-access-fm6s7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.628936 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bbff495-517c-4f7c-b0e0-797cb63884c9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0bbff495-517c-4f7c-b0e0-797cb63884c9" (UID: "0bbff495-517c-4f7c-b0e0-797cb63884c9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.630761 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94313bd3-0b8e-452e-b3b0-c549aabb8426-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "94313bd3-0b8e-452e-b3b0-c549aabb8426" (UID: "94313bd3-0b8e-452e-b3b0-c549aabb8426"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.630873 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bbff495-517c-4f7c-b0e0-797cb63884c9-kube-api-access-kq45m" (OuterVolumeSpecName: "kube-api-access-kq45m") pod "0bbff495-517c-4f7c-b0e0-797cb63884c9" (UID: "0bbff495-517c-4f7c-b0e0-797cb63884c9"). InnerVolumeSpecName "kube-api-access-kq45m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.723967 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/94313bd3-0b8e-452e-b3b0-c549aabb8426-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.724005 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94313bd3-0b8e-452e-b3b0-c549aabb8426-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.724014 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94313bd3-0b8e-452e-b3b0-c549aabb8426-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.724023 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kq45m\" (UniqueName: \"kubernetes.io/projected/0bbff495-517c-4f7c-b0e0-797cb63884c9-kube-api-access-kq45m\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.724033 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0bbff495-517c-4f7c-b0e0-797cb63884c9-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.724041 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bbff495-517c-4f7c-b0e0-797cb63884c9-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.724048 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bbff495-517c-4f7c-b0e0-797cb63884c9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.724056 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fm6s7\" (UniqueName: \"kubernetes.io/projected/94313bd3-0b8e-452e-b3b0-c549aabb8426-kube-api-access-fm6s7\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.750835 5099 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm"] Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.750891 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw"] Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.751005 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.758299 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw"] Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.758425 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.860296 5099 generic.go:358] "Generic (PLEG): container finished" podID="94313bd3-0b8e-452e-b3b0-c549aabb8426" containerID="34047ea6e030398776cb06e1fb65480408e9951a045599fb9b6019409b68b295" exitCode=0 Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.860385 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.860388 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" event={"ID":"94313bd3-0b8e-452e-b3b0-c549aabb8426","Type":"ContainerDied","Data":"34047ea6e030398776cb06e1fb65480408e9951a045599fb9b6019409b68b295"} Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.860510 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-9xszn" event={"ID":"94313bd3-0b8e-452e-b3b0-c549aabb8426","Type":"ContainerDied","Data":"dcc461368224f657e16cb3d574a771c54a519ee2d1dc4f4549e2c77b3542bdf1"} Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.860540 5099 scope.go:117] "RemoveContainer" containerID="34047ea6e030398776cb06e1fb65480408e9951a045599fb9b6019409b68b295" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.863882 5099 generic.go:358] "Generic (PLEG): container finished" podID="0bbff495-517c-4f7c-b0e0-797cb63884c9" containerID="50dcfe6809b0864ac6f2bc14d1ac88044aca0099b299f9275fc8faabf4b2a43b" exitCode=0 Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.863949 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" event={"ID":"0bbff495-517c-4f7c-b0e0-797cb63884c9","Type":"ContainerDied","Data":"50dcfe6809b0864ac6f2bc14d1ac88044aca0099b299f9275fc8faabf4b2a43b"} Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.863974 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.863998 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6" event={"ID":"0bbff495-517c-4f7c-b0e0-797cb63884c9","Type":"ContainerDied","Data":"33e5565d82a8a335fa9da33cfde56e7054663005192cfdfda73e852a2d313610"} Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.881214 5099 scope.go:117] "RemoveContainer" containerID="34047ea6e030398776cb06e1fb65480408e9951a045599fb9b6019409b68b295" Jan 22 14:18:39 crc kubenswrapper[5099]: E0122 14:18:39.881712 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34047ea6e030398776cb06e1fb65480408e9951a045599fb9b6019409b68b295\": container with ID starting with 34047ea6e030398776cb06e1fb65480408e9951a045599fb9b6019409b68b295 not found: ID does not exist" containerID="34047ea6e030398776cb06e1fb65480408e9951a045599fb9b6019409b68b295" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.881763 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34047ea6e030398776cb06e1fb65480408e9951a045599fb9b6019409b68b295"} err="failed to get container status \"34047ea6e030398776cb06e1fb65480408e9951a045599fb9b6019409b68b295\": rpc error: code = NotFound desc = could not find container \"34047ea6e030398776cb06e1fb65480408e9951a045599fb9b6019409b68b295\": container with ID starting with 34047ea6e030398776cb06e1fb65480408e9951a045599fb9b6019409b68b295 not found: ID does not exist" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.881786 5099 scope.go:117] "RemoveContainer" containerID="50dcfe6809b0864ac6f2bc14d1ac88044aca0099b299f9275fc8faabf4b2a43b" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.890244 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-9xszn"] Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.894424 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-9xszn"] Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.900996 5099 scope.go:117] "RemoveContainer" containerID="50dcfe6809b0864ac6f2bc14d1ac88044aca0099b299f9275fc8faabf4b2a43b" Jan 22 14:18:39 crc kubenswrapper[5099]: E0122 14:18:39.901450 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50dcfe6809b0864ac6f2bc14d1ac88044aca0099b299f9275fc8faabf4b2a43b\": container with ID starting with 50dcfe6809b0864ac6f2bc14d1ac88044aca0099b299f9275fc8faabf4b2a43b not found: ID does not exist" containerID="50dcfe6809b0864ac6f2bc14d1ac88044aca0099b299f9275fc8faabf4b2a43b" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.901487 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50dcfe6809b0864ac6f2bc14d1ac88044aca0099b299f9275fc8faabf4b2a43b"} err="failed to get container status \"50dcfe6809b0864ac6f2bc14d1ac88044aca0099b299f9275fc8faabf4b2a43b\": rpc error: code = NotFound desc = could not find container \"50dcfe6809b0864ac6f2bc14d1ac88044aca0099b299f9275fc8faabf4b2a43b\": container with ID starting with 50dcfe6809b0864ac6f2bc14d1ac88044aca0099b299f9275fc8faabf4b2a43b not found: ID does not exist" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.903226 
5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6"] Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.906664 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-k7dg6"] Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.926337 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/28841f24-4229-4963-b6d0-25a303224f2c-tmp\") pod \"route-controller-manager-77fcddb447-x9tbw\" (UID: \"28841f24-4229-4963-b6d0-25a303224f2c\") " pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.926427 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrdhk\" (UniqueName: \"kubernetes.io/projected/28841f24-4229-4963-b6d0-25a303224f2c-kube-api-access-mrdhk\") pod \"route-controller-manager-77fcddb447-x9tbw\" (UID: \"28841f24-4229-4963-b6d0-25a303224f2c\") " pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.926483 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/41150885-5658-4bc2-b1e6-a576c6eba943-tmp\") pod \"controller-manager-7d55cdf7c6-xv2cm\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.926513 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28841f24-4229-4963-b6d0-25a303224f2c-serving-cert\") pod \"route-controller-manager-77fcddb447-x9tbw\" (UID: \"28841f24-4229-4963-b6d0-25a303224f2c\") " pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.926545 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28841f24-4229-4963-b6d0-25a303224f2c-config\") pod \"route-controller-manager-77fcddb447-x9tbw\" (UID: \"28841f24-4229-4963-b6d0-25a303224f2c\") " pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.926666 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41150885-5658-4bc2-b1e6-a576c6eba943-proxy-ca-bundles\") pod \"controller-manager-7d55cdf7c6-xv2cm\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.926766 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41150885-5658-4bc2-b1e6-a576c6eba943-serving-cert\") pod \"controller-manager-7d55cdf7c6-xv2cm\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.926804 5099 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41150885-5658-4bc2-b1e6-a576c6eba943-config\") pod \"controller-manager-7d55cdf7c6-xv2cm\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.926834 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsrdw\" (UniqueName: \"kubernetes.io/projected/41150885-5658-4bc2-b1e6-a576c6eba943-kube-api-access-dsrdw\") pod \"controller-manager-7d55cdf7c6-xv2cm\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.926902 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41150885-5658-4bc2-b1e6-a576c6eba943-client-ca\") pod \"controller-manager-7d55cdf7c6-xv2cm\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:39 crc kubenswrapper[5099]: I0122 14:18:39.926936 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28841f24-4229-4963-b6d0-25a303224f2c-client-ca\") pod \"route-controller-manager-77fcddb447-x9tbw\" (UID: \"28841f24-4229-4963-b6d0-25a303224f2c\") " pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.028442 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41150885-5658-4bc2-b1e6-a576c6eba943-client-ca\") pod \"controller-manager-7d55cdf7c6-xv2cm\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.028506 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28841f24-4229-4963-b6d0-25a303224f2c-client-ca\") pod \"route-controller-manager-77fcddb447-x9tbw\" (UID: \"28841f24-4229-4963-b6d0-25a303224f2c\") " pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.028549 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/28841f24-4229-4963-b6d0-25a303224f2c-tmp\") pod \"route-controller-manager-77fcddb447-x9tbw\" (UID: \"28841f24-4229-4963-b6d0-25a303224f2c\") " pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.028575 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mrdhk\" (UniqueName: \"kubernetes.io/projected/28841f24-4229-4963-b6d0-25a303224f2c-kube-api-access-mrdhk\") pod \"route-controller-manager-77fcddb447-x9tbw\" (UID: \"28841f24-4229-4963-b6d0-25a303224f2c\") " pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.028596 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/41150885-5658-4bc2-b1e6-a576c6eba943-tmp\") pod \"controller-manager-7d55cdf7c6-xv2cm\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.028958 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28841f24-4229-4963-b6d0-25a303224f2c-serving-cert\") pod \"route-controller-manager-77fcddb447-x9tbw\" (UID: \"28841f24-4229-4963-b6d0-25a303224f2c\") " pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.029090 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28841f24-4229-4963-b6d0-25a303224f2c-config\") pod \"route-controller-manager-77fcddb447-x9tbw\" (UID: \"28841f24-4229-4963-b6d0-25a303224f2c\") " pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.029178 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41150885-5658-4bc2-b1e6-a576c6eba943-proxy-ca-bundles\") pod \"controller-manager-7d55cdf7c6-xv2cm\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.029154 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/28841f24-4229-4963-b6d0-25a303224f2c-tmp\") pod \"route-controller-manager-77fcddb447-x9tbw\" (UID: \"28841f24-4229-4963-b6d0-25a303224f2c\") " pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.029223 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41150885-5658-4bc2-b1e6-a576c6eba943-serving-cert\") pod \"controller-manager-7d55cdf7c6-xv2cm\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.029279 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41150885-5658-4bc2-b1e6-a576c6eba943-config\") pod \"controller-manager-7d55cdf7c6-xv2cm\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.029320 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsrdw\" (UniqueName: \"kubernetes.io/projected/41150885-5658-4bc2-b1e6-a576c6eba943-kube-api-access-dsrdw\") pod \"controller-manager-7d55cdf7c6-xv2cm\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.032672 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41150885-5658-4bc2-b1e6-a576c6eba943-proxy-ca-bundles\") pod \"controller-manager-7d55cdf7c6-xv2cm\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " 
pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.032672 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28841f24-4229-4963-b6d0-25a303224f2c-config\") pod \"route-controller-manager-77fcddb447-x9tbw\" (UID: \"28841f24-4229-4963-b6d0-25a303224f2c\") " pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.033085 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/41150885-5658-4bc2-b1e6-a576c6eba943-tmp\") pod \"controller-manager-7d55cdf7c6-xv2cm\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.033187 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41150885-5658-4bc2-b1e6-a576c6eba943-config\") pod \"controller-manager-7d55cdf7c6-xv2cm\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.033230 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41150885-5658-4bc2-b1e6-a576c6eba943-client-ca\") pod \"controller-manager-7d55cdf7c6-xv2cm\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.033396 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28841f24-4229-4963-b6d0-25a303224f2c-client-ca\") pod \"route-controller-manager-77fcddb447-x9tbw\" (UID: \"28841f24-4229-4963-b6d0-25a303224f2c\") " pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.035839 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41150885-5658-4bc2-b1e6-a576c6eba943-serving-cert\") pod \"controller-manager-7d55cdf7c6-xv2cm\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.035885 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28841f24-4229-4963-b6d0-25a303224f2c-serving-cert\") pod \"route-controller-manager-77fcddb447-x9tbw\" (UID: \"28841f24-4229-4963-b6d0-25a303224f2c\") " pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.050964 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrdhk\" (UniqueName: \"kubernetes.io/projected/28841f24-4229-4963-b6d0-25a303224f2c-kube-api-access-mrdhk\") pod \"route-controller-manager-77fcddb447-x9tbw\" (UID: \"28841f24-4229-4963-b6d0-25a303224f2c\") " pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.051110 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsrdw\" 
(UniqueName: \"kubernetes.io/projected/41150885-5658-4bc2-b1e6-a576c6eba943-kube-api-access-dsrdw\") pod \"controller-manager-7d55cdf7c6-xv2cm\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.072860 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.079997 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.115650 5099 patch_prober.go:28] interesting pod/machine-config-daemon-88wst container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.115729 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-88wst" podUID="4620190f-fea2-4e88-8a94-8e1bd1e1db12" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.115781 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-88wst" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.116403 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ac81f6d12ca007b4df78462924c542cd05b380336744c2369659da7b3d6d554"} pod="openshift-machine-config-operator/machine-config-daemon-88wst" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.116464 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-88wst" podUID="4620190f-fea2-4e88-8a94-8e1bd1e1db12" containerName="machine-config-daemon" containerID="cri-o://3ac81f6d12ca007b4df78462924c542cd05b380336744c2369659da7b3d6d554" gracePeriod=600 Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.123965 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.263973 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw"] Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.305137 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm"] Jan 22 14:18:40 crc kubenswrapper[5099]: W0122 14:18:40.318260 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41150885_5658_4bc2_b1e6_a576c6eba943.slice/crio-d92373a1362cdc44728bc9a082b362162e71b98b08f611c8841f9f233db71c46 WatchSource:0}: Error finding container d92373a1362cdc44728bc9a082b362162e71b98b08f611c8841f9f233db71c46: Status 404 returned error can't find the container with id 
d92373a1362cdc44728bc9a082b362162e71b98b08f611c8841f9f233db71c46 Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.768420 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bbff495-517c-4f7c-b0e0-797cb63884c9" path="/var/lib/kubelet/pods/0bbff495-517c-4f7c-b0e0-797cb63884c9/volumes" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.769539 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94313bd3-0b8e-452e-b3b0-c549aabb8426" path="/var/lib/kubelet/pods/94313bd3-0b8e-452e-b3b0-c549aabb8426/volumes" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.869990 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" event={"ID":"28841f24-4229-4963-b6d0-25a303224f2c","Type":"ContainerStarted","Data":"b17ed69fe38e6546d8bce733acdddbb344ac26dd450cf9412c6bbdb916ae0f9c"} Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.871207 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.871313 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" event={"ID":"28841f24-4229-4963-b6d0-25a303224f2c","Type":"ContainerStarted","Data":"438c09f20450339932644e2730443e0b0c02483ec926b3a95e11a883f23f30a7"} Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.876922 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" event={"ID":"41150885-5658-4bc2-b1e6-a576c6eba943","Type":"ContainerStarted","Data":"753459ac59c137784ac6442842c27ec7428fc3042dce5b3ac40b002c3b44cbbd"} Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.876967 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" event={"ID":"41150885-5658-4bc2-b1e6-a576c6eba943","Type":"ContainerStarted","Data":"d92373a1362cdc44728bc9a082b362162e71b98b08f611c8841f9f233db71c46"} Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.878944 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.880909 5099 generic.go:358] "Generic (PLEG): container finished" podID="4620190f-fea2-4e88-8a94-8e1bd1e1db12" containerID="3ac81f6d12ca007b4df78462924c542cd05b380336744c2369659da7b3d6d554" exitCode=0 Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.880963 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-88wst" event={"ID":"4620190f-fea2-4e88-8a94-8e1bd1e1db12","Type":"ContainerDied","Data":"3ac81f6d12ca007b4df78462924c542cd05b380336744c2369659da7b3d6d554"} Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.880979 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-88wst" event={"ID":"4620190f-fea2-4e88-8a94-8e1bd1e1db12","Type":"ContainerStarted","Data":"aa9ae15fe4ad370e9704f2528ddefeed5df950fd647a16eace643fbf5d0953c4"} Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.890044 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" Jan 22 14:18:40 
crc kubenswrapper[5099]: I0122 14:18:40.893796 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.911104 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" podStartSLOduration=1.911083088 podStartE2EDuration="1.911083088s" podCreationTimestamp="2026-01-22 14:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:18:40.902639738 +0000 UTC m=+278.610389995" watchObservedRunningTime="2026-01-22 14:18:40.911083088 +0000 UTC m=+278.618833325" Jan 22 14:18:40 crc kubenswrapper[5099]: I0122 14:18:40.926724 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" podStartSLOduration=1.9267095539999999 podStartE2EDuration="1.926709554s" podCreationTimestamp="2026-01-22 14:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:18:40.924684062 +0000 UTC m=+278.632434309" watchObservedRunningTime="2026-01-22 14:18:40.926709554 +0000 UTC m=+278.634459791" Jan 22 14:18:41 crc kubenswrapper[5099]: I0122 14:18:41.189181 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.056338 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm"] Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.057064 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" podUID="41150885-5658-4bc2-b1e6-a576c6eba943" containerName="controller-manager" containerID="cri-o://753459ac59c137784ac6442842c27ec7428fc3042dce5b3ac40b002c3b44cbbd" gracePeriod=30 Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.072050 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw"] Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.072312 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" podUID="28841f24-4229-4963-b6d0-25a303224f2c" containerName="route-controller-manager" containerID="cri-o://b17ed69fe38e6546d8bce733acdddbb344ac26dd450cf9412c6bbdb916ae0f9c" gracePeriod=30 Jan 22 14:18:45 crc kubenswrapper[5099]: E0122 14:18:45.167116 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28841f24_4229_4963_b6d0_25a303224f2c.slice/crio-b17ed69fe38e6546d8bce733acdddbb344ac26dd450cf9412c6bbdb916ae0f9c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28841f24_4229_4963_b6d0_25a303224f2c.slice/crio-conmon-b17ed69fe38e6546d8bce733acdddbb344ac26dd450cf9412c6bbdb916ae0f9c.scope\": RecentStats: unable to find data in memory cache]" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.455411 5099 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.460870 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.491648 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2"] Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.496374 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="41150885-5658-4bc2-b1e6-a576c6eba943" containerName="controller-manager" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.496403 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="41150885-5658-4bc2-b1e6-a576c6eba943" containerName="controller-manager" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.496434 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28841f24-4229-4963-b6d0-25a303224f2c" containerName="route-controller-manager" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.496444 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="28841f24-4229-4963-b6d0-25a303224f2c" containerName="route-controller-manager" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.496565 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="28841f24-4229-4963-b6d0-25a303224f2c" containerName="route-controller-manager" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.496580 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="41150885-5658-4bc2-b1e6-a576c6eba943" containerName="controller-manager" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.507829 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.511311 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2"] Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.512493 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41150885-5658-4bc2-b1e6-a576c6eba943-client-ca\") pod \"41150885-5658-4bc2-b1e6-a576c6eba943\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.512680 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/28841f24-4229-4963-b6d0-25a303224f2c-tmp\") pod \"28841f24-4229-4963-b6d0-25a303224f2c\" (UID: \"28841f24-4229-4963-b6d0-25a303224f2c\") " Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.512804 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41150885-5658-4bc2-b1e6-a576c6eba943-serving-cert\") pod \"41150885-5658-4bc2-b1e6-a576c6eba943\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.512950 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28841f24-4229-4963-b6d0-25a303224f2c-serving-cert\") pod \"28841f24-4229-4963-b6d0-25a303224f2c\" (UID: \"28841f24-4229-4963-b6d0-25a303224f2c\") " Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.513046 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28841f24-4229-4963-b6d0-25a303224f2c-tmp" (OuterVolumeSpecName: "tmp") pod "28841f24-4229-4963-b6d0-25a303224f2c" (UID: "28841f24-4229-4963-b6d0-25a303224f2c"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.513208 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28841f24-4229-4963-b6d0-25a303224f2c-client-ca\") pod \"28841f24-4229-4963-b6d0-25a303224f2c\" (UID: \"28841f24-4229-4963-b6d0-25a303224f2c\") " Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.513363 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrdhk\" (UniqueName: \"kubernetes.io/projected/28841f24-4229-4963-b6d0-25a303224f2c-kube-api-access-mrdhk\") pod \"28841f24-4229-4963-b6d0-25a303224f2c\" (UID: \"28841f24-4229-4963-b6d0-25a303224f2c\") " Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.513495 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28841f24-4229-4963-b6d0-25a303224f2c-config\") pod \"28841f24-4229-4963-b6d0-25a303224f2c\" (UID: \"28841f24-4229-4963-b6d0-25a303224f2c\") " Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.513617 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/41150885-5658-4bc2-b1e6-a576c6eba943-tmp\") pod \"41150885-5658-4bc2-b1e6-a576c6eba943\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.513703 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28841f24-4229-4963-b6d0-25a303224f2c-client-ca" (OuterVolumeSpecName: "client-ca") pod "28841f24-4229-4963-b6d0-25a303224f2c" (UID: "28841f24-4229-4963-b6d0-25a303224f2c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.513831 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41150885-5658-4bc2-b1e6-a576c6eba943-proxy-ca-bundles\") pod \"41150885-5658-4bc2-b1e6-a576c6eba943\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.514010 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsrdw\" (UniqueName: \"kubernetes.io/projected/41150885-5658-4bc2-b1e6-a576c6eba943-kube-api-access-dsrdw\") pod \"41150885-5658-4bc2-b1e6-a576c6eba943\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.514134 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41150885-5658-4bc2-b1e6-a576c6eba943-config\") pod \"41150885-5658-4bc2-b1e6-a576c6eba943\" (UID: \"41150885-5658-4bc2-b1e6-a576c6eba943\") " Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.514070 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41150885-5658-4bc2-b1e6-a576c6eba943-tmp" (OuterVolumeSpecName: "tmp") pod "41150885-5658-4bc2-b1e6-a576c6eba943" (UID: "41150885-5658-4bc2-b1e6-a576c6eba943"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.514642 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41150885-5658-4bc2-b1e6-a576c6eba943-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "41150885-5658-4bc2-b1e6-a576c6eba943" (UID: "41150885-5658-4bc2-b1e6-a576c6eba943"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.514738 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41150885-5658-4bc2-b1e6-a576c6eba943-client-ca" (OuterVolumeSpecName: "client-ca") pod "41150885-5658-4bc2-b1e6-a576c6eba943" (UID: "41150885-5658-4bc2-b1e6-a576c6eba943"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.514768 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28841f24-4229-4963-b6d0-25a303224f2c-config" (OuterVolumeSpecName: "config") pod "28841f24-4229-4963-b6d0-25a303224f2c" (UID: "28841f24-4229-4963-b6d0-25a303224f2c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.514994 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41150885-5658-4bc2-b1e6-a576c6eba943-config" (OuterVolumeSpecName: "config") pod "41150885-5658-4bc2-b1e6-a576c6eba943" (UID: "41150885-5658-4bc2-b1e6-a576c6eba943"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.515174 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41150885-5658-4bc2-b1e6-a576c6eba943-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.515196 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41150885-5658-4bc2-b1e6-a576c6eba943-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.515208 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41150885-5658-4bc2-b1e6-a576c6eba943-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.515217 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/28841f24-4229-4963-b6d0-25a303224f2c-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.515225 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28841f24-4229-4963-b6d0-25a303224f2c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.515234 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28841f24-4229-4963-b6d0-25a303224f2c-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.515243 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/41150885-5658-4bc2-b1e6-a576c6eba943-tmp\") on node \"crc\" DevicePath \"\"" Jan 
22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.525699 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41150885-5658-4bc2-b1e6-a576c6eba943-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "41150885-5658-4bc2-b1e6-a576c6eba943" (UID: "41150885-5658-4bc2-b1e6-a576c6eba943"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.533705 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41150885-5658-4bc2-b1e6-a576c6eba943-kube-api-access-dsrdw" (OuterVolumeSpecName: "kube-api-access-dsrdw") pod "41150885-5658-4bc2-b1e6-a576c6eba943" (UID: "41150885-5658-4bc2-b1e6-a576c6eba943"). InnerVolumeSpecName "kube-api-access-dsrdw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.535766 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28841f24-4229-4963-b6d0-25a303224f2c-kube-api-access-mrdhk" (OuterVolumeSpecName: "kube-api-access-mrdhk") pod "28841f24-4229-4963-b6d0-25a303224f2c" (UID: "28841f24-4229-4963-b6d0-25a303224f2c"). InnerVolumeSpecName "kube-api-access-mrdhk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.539152 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28841f24-4229-4963-b6d0-25a303224f2c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "28841f24-4229-4963-b6d0-25a303224f2c" (UID: "28841f24-4229-4963-b6d0-25a303224f2c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.540941 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-549c648458-h5xck"] Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.551366 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-549c648458-h5xck"] Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.551506 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.616731 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/492545c6-2cb5-4c78-afe9-29e92d5b70c4-proxy-ca-bundles\") pod \"controller-manager-549c648458-h5xck\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.616785 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/492545c6-2cb5-4c78-afe9-29e92d5b70c4-client-ca\") pod \"controller-manager-549c648458-h5xck\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.616808 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492545c6-2cb5-4c78-afe9-29e92d5b70c4-config\") pod \"controller-manager-549c648458-h5xck\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.616831 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55858\" (UniqueName: \"kubernetes.io/projected/492545c6-2cb5-4c78-afe9-29e92d5b70c4-kube-api-access-55858\") pod \"controller-manager-549c648458-h5xck\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.616863 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5ws7\" (UniqueName: \"kubernetes.io/projected/16dafabd-7338-4eef-8866-7c8c005104fb-kube-api-access-q5ws7\") pod \"route-controller-manager-6444f75685-gkrv2\" (UID: \"16dafabd-7338-4eef-8866-7c8c005104fb\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.616915 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16dafabd-7338-4eef-8866-7c8c005104fb-config\") pod \"route-controller-manager-6444f75685-gkrv2\" (UID: \"16dafabd-7338-4eef-8866-7c8c005104fb\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.616944 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16dafabd-7338-4eef-8866-7c8c005104fb-client-ca\") pod \"route-controller-manager-6444f75685-gkrv2\" (UID: \"16dafabd-7338-4eef-8866-7c8c005104fb\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.616976 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/492545c6-2cb5-4c78-afe9-29e92d5b70c4-tmp\") pod \"controller-manager-549c648458-h5xck\" (UID: 
\"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.616993 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/492545c6-2cb5-4c78-afe9-29e92d5b70c4-serving-cert\") pod \"controller-manager-549c648458-h5xck\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.617011 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16dafabd-7338-4eef-8866-7c8c005104fb-tmp\") pod \"route-controller-manager-6444f75685-gkrv2\" (UID: \"16dafabd-7338-4eef-8866-7c8c005104fb\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.617030 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16dafabd-7338-4eef-8866-7c8c005104fb-serving-cert\") pod \"route-controller-manager-6444f75685-gkrv2\" (UID: \"16dafabd-7338-4eef-8866-7c8c005104fb\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.617069 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mrdhk\" (UniqueName: \"kubernetes.io/projected/28841f24-4229-4963-b6d0-25a303224f2c-kube-api-access-mrdhk\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.617079 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dsrdw\" (UniqueName: \"kubernetes.io/projected/41150885-5658-4bc2-b1e6-a576c6eba943-kube-api-access-dsrdw\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.617088 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41150885-5658-4bc2-b1e6-a576c6eba943-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.617098 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28841f24-4229-4963-b6d0-25a303224f2c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.718374 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16dafabd-7338-4eef-8866-7c8c005104fb-client-ca\") pod \"route-controller-manager-6444f75685-gkrv2\" (UID: \"16dafabd-7338-4eef-8866-7c8c005104fb\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.718427 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/492545c6-2cb5-4c78-afe9-29e92d5b70c4-tmp\") pod \"controller-manager-549c648458-h5xck\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.718444 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/492545c6-2cb5-4c78-afe9-29e92d5b70c4-serving-cert\") pod \"controller-manager-549c648458-h5xck\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.718461 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16dafabd-7338-4eef-8866-7c8c005104fb-tmp\") pod \"route-controller-manager-6444f75685-gkrv2\" (UID: \"16dafabd-7338-4eef-8866-7c8c005104fb\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.718482 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16dafabd-7338-4eef-8866-7c8c005104fb-serving-cert\") pod \"route-controller-manager-6444f75685-gkrv2\" (UID: \"16dafabd-7338-4eef-8866-7c8c005104fb\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.718516 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/492545c6-2cb5-4c78-afe9-29e92d5b70c4-proxy-ca-bundles\") pod \"controller-manager-549c648458-h5xck\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.718548 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/492545c6-2cb5-4c78-afe9-29e92d5b70c4-client-ca\") pod \"controller-manager-549c648458-h5xck\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.718572 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492545c6-2cb5-4c78-afe9-29e92d5b70c4-config\") pod \"controller-manager-549c648458-h5xck\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.718596 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-55858\" (UniqueName: \"kubernetes.io/projected/492545c6-2cb5-4c78-afe9-29e92d5b70c4-kube-api-access-55858\") pod \"controller-manager-549c648458-h5xck\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.718646 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q5ws7\" (UniqueName: \"kubernetes.io/projected/16dafabd-7338-4eef-8866-7c8c005104fb-kube-api-access-q5ws7\") pod \"route-controller-manager-6444f75685-gkrv2\" (UID: \"16dafabd-7338-4eef-8866-7c8c005104fb\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.718664 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16dafabd-7338-4eef-8866-7c8c005104fb-config\") pod \"route-controller-manager-6444f75685-gkrv2\" (UID: 
\"16dafabd-7338-4eef-8866-7c8c005104fb\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.719904 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16dafabd-7338-4eef-8866-7c8c005104fb-config\") pod \"route-controller-manager-6444f75685-gkrv2\" (UID: \"16dafabd-7338-4eef-8866-7c8c005104fb\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.720542 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16dafabd-7338-4eef-8866-7c8c005104fb-client-ca\") pod \"route-controller-manager-6444f75685-gkrv2\" (UID: \"16dafabd-7338-4eef-8866-7c8c005104fb\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.720977 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/492545c6-2cb5-4c78-afe9-29e92d5b70c4-proxy-ca-bundles\") pod \"controller-manager-549c648458-h5xck\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.721206 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/492545c6-2cb5-4c78-afe9-29e92d5b70c4-client-ca\") pod \"controller-manager-549c648458-h5xck\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.722149 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/492545c6-2cb5-4c78-afe9-29e92d5b70c4-tmp\") pod \"controller-manager-549c648458-h5xck\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.722248 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16dafabd-7338-4eef-8866-7c8c005104fb-tmp\") pod \"route-controller-manager-6444f75685-gkrv2\" (UID: \"16dafabd-7338-4eef-8866-7c8c005104fb\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.722754 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492545c6-2cb5-4c78-afe9-29e92d5b70c4-config\") pod \"controller-manager-549c648458-h5xck\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.725749 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16dafabd-7338-4eef-8866-7c8c005104fb-serving-cert\") pod \"route-controller-manager-6444f75685-gkrv2\" (UID: \"16dafabd-7338-4eef-8866-7c8c005104fb\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.726662 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/492545c6-2cb5-4c78-afe9-29e92d5b70c4-serving-cert\") pod \"controller-manager-549c648458-h5xck\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.751023 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-55858\" (UniqueName: \"kubernetes.io/projected/492545c6-2cb5-4c78-afe9-29e92d5b70c4-kube-api-access-55858\") pod \"controller-manager-549c648458-h5xck\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.758953 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5ws7\" (UniqueName: \"kubernetes.io/projected/16dafabd-7338-4eef-8866-7c8c005104fb-kube-api-access-q5ws7\") pod \"route-controller-manager-6444f75685-gkrv2\" (UID: \"16dafabd-7338-4eef-8866-7c8c005104fb\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.834619 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.864560 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.914185 5099 generic.go:358] "Generic (PLEG): container finished" podID="28841f24-4229-4963-b6d0-25a303224f2c" containerID="b17ed69fe38e6546d8bce733acdddbb344ac26dd450cf9412c6bbdb916ae0f9c" exitCode=0 Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.914276 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" event={"ID":"28841f24-4229-4963-b6d0-25a303224f2c","Type":"ContainerDied","Data":"b17ed69fe38e6546d8bce733acdddbb344ac26dd450cf9412c6bbdb916ae0f9c"} Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.914559 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" event={"ID":"28841f24-4229-4963-b6d0-25a303224f2c","Type":"ContainerDied","Data":"438c09f20450339932644e2730443e0b0c02483ec926b3a95e11a883f23f30a7"} Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.914625 5099 scope.go:117] "RemoveContainer" containerID="b17ed69fe38e6546d8bce733acdddbb344ac26dd450cf9412c6bbdb916ae0f9c" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.914363 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.922495 5099 generic.go:358] "Generic (PLEG): container finished" podID="41150885-5658-4bc2-b1e6-a576c6eba943" containerID="753459ac59c137784ac6442842c27ec7428fc3042dce5b3ac40b002c3b44cbbd" exitCode=0 Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.922571 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" event={"ID":"41150885-5658-4bc2-b1e6-a576c6eba943","Type":"ContainerDied","Data":"753459ac59c137784ac6442842c27ec7428fc3042dce5b3ac40b002c3b44cbbd"} Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.922595 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" event={"ID":"41150885-5658-4bc2-b1e6-a576c6eba943","Type":"ContainerDied","Data":"d92373a1362cdc44728bc9a082b362162e71b98b08f611c8841f9f233db71c46"} Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.922658 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.946445 5099 scope.go:117] "RemoveContainer" containerID="b17ed69fe38e6546d8bce733acdddbb344ac26dd450cf9412c6bbdb916ae0f9c" Jan 22 14:18:45 crc kubenswrapper[5099]: E0122 14:18:45.959551 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b17ed69fe38e6546d8bce733acdddbb344ac26dd450cf9412c6bbdb916ae0f9c\": container with ID starting with b17ed69fe38e6546d8bce733acdddbb344ac26dd450cf9412c6bbdb916ae0f9c not found: ID does not exist" containerID="b17ed69fe38e6546d8bce733acdddbb344ac26dd450cf9412c6bbdb916ae0f9c" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.959617 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b17ed69fe38e6546d8bce733acdddbb344ac26dd450cf9412c6bbdb916ae0f9c"} err="failed to get container status \"b17ed69fe38e6546d8bce733acdddbb344ac26dd450cf9412c6bbdb916ae0f9c\": rpc error: code = NotFound desc = could not find container \"b17ed69fe38e6546d8bce733acdddbb344ac26dd450cf9412c6bbdb916ae0f9c\": container with ID starting with b17ed69fe38e6546d8bce733acdddbb344ac26dd450cf9412c6bbdb916ae0f9c not found: ID does not exist" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.959647 5099 scope.go:117] "RemoveContainer" containerID="753459ac59c137784ac6442842c27ec7428fc3042dce5b3ac40b002c3b44cbbd" Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.980997 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw"] Jan 22 14:18:45 crc kubenswrapper[5099]: I0122 14:18:45.992751 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77fcddb447-x9tbw"] Jan 22 14:18:46 crc kubenswrapper[5099]: I0122 14:18:46.032677 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm"] Jan 22 14:18:46 crc kubenswrapper[5099]: I0122 14:18:46.047067 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7d55cdf7c6-xv2cm"] Jan 22 14:18:46 crc kubenswrapper[5099]: I0122 14:18:46.069461 5099 scope.go:117] 
"RemoveContainer" containerID="753459ac59c137784ac6442842c27ec7428fc3042dce5b3ac40b002c3b44cbbd" Jan 22 14:18:46 crc kubenswrapper[5099]: E0122 14:18:46.071323 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"753459ac59c137784ac6442842c27ec7428fc3042dce5b3ac40b002c3b44cbbd\": container with ID starting with 753459ac59c137784ac6442842c27ec7428fc3042dce5b3ac40b002c3b44cbbd not found: ID does not exist" containerID="753459ac59c137784ac6442842c27ec7428fc3042dce5b3ac40b002c3b44cbbd" Jan 22 14:18:46 crc kubenswrapper[5099]: I0122 14:18:46.071367 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"753459ac59c137784ac6442842c27ec7428fc3042dce5b3ac40b002c3b44cbbd"} err="failed to get container status \"753459ac59c137784ac6442842c27ec7428fc3042dce5b3ac40b002c3b44cbbd\": rpc error: code = NotFound desc = could not find container \"753459ac59c137784ac6442842c27ec7428fc3042dce5b3ac40b002c3b44cbbd\": container with ID starting with 753459ac59c137784ac6442842c27ec7428fc3042dce5b3ac40b002c3b44cbbd not found: ID does not exist" Jan 22 14:18:46 crc kubenswrapper[5099]: I0122 14:18:46.250273 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-549c648458-h5xck"] Jan 22 14:18:46 crc kubenswrapper[5099]: I0122 14:18:46.384541 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2"] Jan 22 14:18:46 crc kubenswrapper[5099]: I0122 14:18:46.776301 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28841f24-4229-4963-b6d0-25a303224f2c" path="/var/lib/kubelet/pods/28841f24-4229-4963-b6d0-25a303224f2c/volumes" Jan 22 14:18:46 crc kubenswrapper[5099]: I0122 14:18:46.777434 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41150885-5658-4bc2-b1e6-a576c6eba943" path="/var/lib/kubelet/pods/41150885-5658-4bc2-b1e6-a576c6eba943/volumes" Jan 22 14:18:46 crc kubenswrapper[5099]: I0122 14:18:46.932769 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" event={"ID":"16dafabd-7338-4eef-8866-7c8c005104fb","Type":"ContainerStarted","Data":"2e032c4050218aec8b280c671e35f30d8fe054f1987f92ae0bdf23deea61d46f"} Jan 22 14:18:46 crc kubenswrapper[5099]: I0122 14:18:46.932833 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" event={"ID":"16dafabd-7338-4eef-8866-7c8c005104fb","Type":"ContainerStarted","Data":"e54d68d96e7cfd632296c66fa51186912904b220a3499795f7971d058e7cf4d4"} Jan 22 14:18:46 crc kubenswrapper[5099]: I0122 14:18:46.933288 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" Jan 22 14:18:46 crc kubenswrapper[5099]: I0122 14:18:46.935271 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-549c648458-h5xck" event={"ID":"492545c6-2cb5-4c78-afe9-29e92d5b70c4","Type":"ContainerStarted","Data":"c0b2ebdbae690661d46d759bc115dec97c86fe4edf5d33ab62408c9725768fa9"} Jan 22 14:18:46 crc kubenswrapper[5099]: I0122 14:18:46.935325 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-549c648458-h5xck" 
event={"ID":"492545c6-2cb5-4c78-afe9-29e92d5b70c4","Type":"ContainerStarted","Data":"3f9ae26c21b93c79918a2c66e8656eb1cb7a684633f10c3eab9ffb7a15db9c96"} Jan 22 14:18:46 crc kubenswrapper[5099]: I0122 14:18:46.935658 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:46 crc kubenswrapper[5099]: I0122 14:18:46.942201 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" Jan 22 14:18:46 crc kubenswrapper[5099]: I0122 14:18:46.953295 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" podStartSLOduration=1.953271577 podStartE2EDuration="1.953271577s" podCreationTimestamp="2026-01-22 14:18:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:18:46.95144722 +0000 UTC m=+284.659197467" watchObservedRunningTime="2026-01-22 14:18:46.953271577 +0000 UTC m=+284.661021814" Jan 22 14:18:46 crc kubenswrapper[5099]: I0122 14:18:46.972693 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-549c648458-h5xck" podStartSLOduration=1.972662742 podStartE2EDuration="1.972662742s" podCreationTimestamp="2026-01-22 14:18:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:18:46.971937363 +0000 UTC m=+284.679687610" watchObservedRunningTime="2026-01-22 14:18:46.972662742 +0000 UTC m=+284.680412979" Jan 22 14:18:47 crc kubenswrapper[5099]: I0122 14:18:47.211649 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:47 crc kubenswrapper[5099]: I0122 14:18:47.388098 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-549c648458-h5xck"] Jan 22 14:18:47 crc kubenswrapper[5099]: I0122 14:18:47.403587 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2"] Jan 22 14:18:47 crc kubenswrapper[5099]: I0122 14:18:47.465965 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 22 14:18:48 crc kubenswrapper[5099]: I0122 14:18:48.674404 5099 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 14:18:48 crc kubenswrapper[5099]: I0122 14:18:48.950867 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-549c648458-h5xck" podUID="492545c6-2cb5-4c78-afe9-29e92d5b70c4" containerName="controller-manager" containerID="cri-o://c0b2ebdbae690661d46d759bc115dec97c86fe4edf5d33ab62408c9725768fa9" gracePeriod=30 Jan 22 14:18:48 crc kubenswrapper[5099]: I0122 14:18:48.950763 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" podUID="16dafabd-7338-4eef-8866-7c8c005104fb" containerName="route-controller-manager" containerID="cri-o://2e032c4050218aec8b280c671e35f30d8fe054f1987f92ae0bdf23deea61d46f" 
gracePeriod=30 Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.040816 5099 ???:1] "http: TLS handshake error from 192.168.126.11:55738: no serving certificate available for the kubelet" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.347676 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.355266 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.382510 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-549b6648c8-gf2nm"] Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.383417 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="492545c6-2cb5-4c78-afe9-29e92d5b70c4" containerName="controller-manager" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.383447 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="492545c6-2cb5-4c78-afe9-29e92d5b70c4" containerName="controller-manager" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.383483 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="16dafabd-7338-4eef-8866-7c8c005104fb" containerName="route-controller-manager" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.383491 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="16dafabd-7338-4eef-8866-7c8c005104fb" containerName="route-controller-manager" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.383627 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="492545c6-2cb5-4c78-afe9-29e92d5b70c4" containerName="controller-manager" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.383653 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="16dafabd-7338-4eef-8866-7c8c005104fb" containerName="route-controller-manager" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.390996 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.402668 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-549b6648c8-gf2nm"] Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.406295 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r"] Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.410058 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.428524 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r"] Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471070 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/492545c6-2cb5-4c78-afe9-29e92d5b70c4-tmp\") pod \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471116 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16dafabd-7338-4eef-8866-7c8c005104fb-serving-cert\") pod \"16dafabd-7338-4eef-8866-7c8c005104fb\" (UID: \"16dafabd-7338-4eef-8866-7c8c005104fb\") " Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471135 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/492545c6-2cb5-4c78-afe9-29e92d5b70c4-client-ca\") pod \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471185 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/492545c6-2cb5-4c78-afe9-29e92d5b70c4-proxy-ca-bundles\") pod \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471228 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5ws7\" (UniqueName: \"kubernetes.io/projected/16dafabd-7338-4eef-8866-7c8c005104fb-kube-api-access-q5ws7\") pod \"16dafabd-7338-4eef-8866-7c8c005104fb\" (UID: \"16dafabd-7338-4eef-8866-7c8c005104fb\") " Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471247 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/492545c6-2cb5-4c78-afe9-29e92d5b70c4-serving-cert\") pod \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471272 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16dafabd-7338-4eef-8866-7c8c005104fb-config\") pod \"16dafabd-7338-4eef-8866-7c8c005104fb\" (UID: \"16dafabd-7338-4eef-8866-7c8c005104fb\") " Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471319 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492545c6-2cb5-4c78-afe9-29e92d5b70c4-config\") pod \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471359 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55858\" (UniqueName: \"kubernetes.io/projected/492545c6-2cb5-4c78-afe9-29e92d5b70c4-kube-api-access-55858\") pod \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\" (UID: \"492545c6-2cb5-4c78-afe9-29e92d5b70c4\") " Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471394 5099 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16dafabd-7338-4eef-8866-7c8c005104fb-client-ca\") pod \"16dafabd-7338-4eef-8866-7c8c005104fb\" (UID: \"16dafabd-7338-4eef-8866-7c8c005104fb\") " Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471415 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16dafabd-7338-4eef-8866-7c8c005104fb-tmp\") pod \"16dafabd-7338-4eef-8866-7c8c005104fb\" (UID: \"16dafabd-7338-4eef-8866-7c8c005104fb\") " Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471510 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bab26e39-d459-41cb-b009-09a804261374-proxy-ca-bundles\") pod \"controller-manager-549b6648c8-gf2nm\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471534 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bb7b07c4-a956-458e-abaa-06dc68b5e359-tmp\") pod \"route-controller-manager-7f87b57647-4j86r\" (UID: \"bb7b07c4-a956-458e-abaa-06dc68b5e359\") " pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471559 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmc4p\" (UniqueName: \"kubernetes.io/projected/bb7b07c4-a956-458e-abaa-06dc68b5e359-kube-api-access-dmc4p\") pod \"route-controller-manager-7f87b57647-4j86r\" (UID: \"bb7b07c4-a956-458e-abaa-06dc68b5e359\") " pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471585 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bab26e39-d459-41cb-b009-09a804261374-config\") pod \"controller-manager-549b6648c8-gf2nm\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471601 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb7b07c4-a956-458e-abaa-06dc68b5e359-client-ca\") pod \"route-controller-manager-7f87b57647-4j86r\" (UID: \"bb7b07c4-a956-458e-abaa-06dc68b5e359\") " pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471617 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb7b07c4-a956-458e-abaa-06dc68b5e359-serving-cert\") pod \"route-controller-manager-7f87b57647-4j86r\" (UID: \"bb7b07c4-a956-458e-abaa-06dc68b5e359\") " pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471634 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb7b07c4-a956-458e-abaa-06dc68b5e359-config\") pod 
\"route-controller-manager-7f87b57647-4j86r\" (UID: \"bb7b07c4-a956-458e-abaa-06dc68b5e359\") " pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471648 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bab26e39-d459-41cb-b009-09a804261374-client-ca\") pod \"controller-manager-549b6648c8-gf2nm\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471692 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrvzw\" (UniqueName: \"kubernetes.io/projected/bab26e39-d459-41cb-b009-09a804261374-kube-api-access-nrvzw\") pod \"controller-manager-549b6648c8-gf2nm\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471724 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bab26e39-d459-41cb-b009-09a804261374-serving-cert\") pod \"controller-manager-549b6648c8-gf2nm\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.471758 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bab26e39-d459-41cb-b009-09a804261374-tmp\") pod \"controller-manager-549b6648c8-gf2nm\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.472367 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16dafabd-7338-4eef-8866-7c8c005104fb-tmp" (OuterVolumeSpecName: "tmp") pod "16dafabd-7338-4eef-8866-7c8c005104fb" (UID: "16dafabd-7338-4eef-8866-7c8c005104fb"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.472791 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/492545c6-2cb5-4c78-afe9-29e92d5b70c4-tmp" (OuterVolumeSpecName: "tmp") pod "492545c6-2cb5-4c78-afe9-29e92d5b70c4" (UID: "492545c6-2cb5-4c78-afe9-29e92d5b70c4"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.472922 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16dafabd-7338-4eef-8866-7c8c005104fb-client-ca" (OuterVolumeSpecName: "client-ca") pod "16dafabd-7338-4eef-8866-7c8c005104fb" (UID: "16dafabd-7338-4eef-8866-7c8c005104fb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.472940 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16dafabd-7338-4eef-8866-7c8c005104fb-config" (OuterVolumeSpecName: "config") pod "16dafabd-7338-4eef-8866-7c8c005104fb" (UID: "16dafabd-7338-4eef-8866-7c8c005104fb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.473737 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/492545c6-2cb5-4c78-afe9-29e92d5b70c4-config" (OuterVolumeSpecName: "config") pod "492545c6-2cb5-4c78-afe9-29e92d5b70c4" (UID: "492545c6-2cb5-4c78-afe9-29e92d5b70c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.473859 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/492545c6-2cb5-4c78-afe9-29e92d5b70c4-client-ca" (OuterVolumeSpecName: "client-ca") pod "492545c6-2cb5-4c78-afe9-29e92d5b70c4" (UID: "492545c6-2cb5-4c78-afe9-29e92d5b70c4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.473873 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/492545c6-2cb5-4c78-afe9-29e92d5b70c4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "492545c6-2cb5-4c78-afe9-29e92d5b70c4" (UID: "492545c6-2cb5-4c78-afe9-29e92d5b70c4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.478942 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16dafabd-7338-4eef-8866-7c8c005104fb-kube-api-access-q5ws7" (OuterVolumeSpecName: "kube-api-access-q5ws7") pod "16dafabd-7338-4eef-8866-7c8c005104fb" (UID: "16dafabd-7338-4eef-8866-7c8c005104fb"). InnerVolumeSpecName "kube-api-access-q5ws7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.480122 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492545c6-2cb5-4c78-afe9-29e92d5b70c4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "492545c6-2cb5-4c78-afe9-29e92d5b70c4" (UID: "492545c6-2cb5-4c78-afe9-29e92d5b70c4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.481800 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16dafabd-7338-4eef-8866-7c8c005104fb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16dafabd-7338-4eef-8866-7c8c005104fb" (UID: "16dafabd-7338-4eef-8866-7c8c005104fb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.482109 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/492545c6-2cb5-4c78-afe9-29e92d5b70c4-kube-api-access-55858" (OuterVolumeSpecName: "kube-api-access-55858") pod "492545c6-2cb5-4c78-afe9-29e92d5b70c4" (UID: "492545c6-2cb5-4c78-afe9-29e92d5b70c4"). InnerVolumeSpecName "kube-api-access-55858". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.573235 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bab26e39-d459-41cb-b009-09a804261374-config\") pod \"controller-manager-549b6648c8-gf2nm\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.573572 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb7b07c4-a956-458e-abaa-06dc68b5e359-client-ca\") pod \"route-controller-manager-7f87b57647-4j86r\" (UID: \"bb7b07c4-a956-458e-abaa-06dc68b5e359\") " pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.573740 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb7b07c4-a956-458e-abaa-06dc68b5e359-serving-cert\") pod \"route-controller-manager-7f87b57647-4j86r\" (UID: \"bb7b07c4-a956-458e-abaa-06dc68b5e359\") " pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.573869 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb7b07c4-a956-458e-abaa-06dc68b5e359-config\") pod \"route-controller-manager-7f87b57647-4j86r\" (UID: \"bb7b07c4-a956-458e-abaa-06dc68b5e359\") " pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.573999 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bab26e39-d459-41cb-b009-09a804261374-client-ca\") pod \"controller-manager-549b6648c8-gf2nm\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.574238 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nrvzw\" (UniqueName: \"kubernetes.io/projected/bab26e39-d459-41cb-b009-09a804261374-kube-api-access-nrvzw\") pod \"controller-manager-549b6648c8-gf2nm\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.574626 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bab26e39-d459-41cb-b009-09a804261374-serving-cert\") pod \"controller-manager-549b6648c8-gf2nm\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.574794 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bab26e39-d459-41cb-b009-09a804261374-tmp\") pod \"controller-manager-549b6648c8-gf2nm\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.574959 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bab26e39-d459-41cb-b009-09a804261374-proxy-ca-bundles\") pod \"controller-manager-549b6648c8-gf2nm\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.575133 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bb7b07c4-a956-458e-abaa-06dc68b5e359-tmp\") pod \"route-controller-manager-7f87b57647-4j86r\" (UID: \"bb7b07c4-a956-458e-abaa-06dc68b5e359\") " pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.575524 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dmc4p\" (UniqueName: \"kubernetes.io/projected/bb7b07c4-a956-458e-abaa-06dc68b5e359-kube-api-access-dmc4p\") pod \"route-controller-manager-7f87b57647-4j86r\" (UID: \"bb7b07c4-a956-458e-abaa-06dc68b5e359\") " pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.575646 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bab26e39-d459-41cb-b009-09a804261374-tmp\") pod \"controller-manager-549b6648c8-gf2nm\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.575221 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bab26e39-d459-41cb-b009-09a804261374-config\") pod \"controller-manager-549b6648c8-gf2nm\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.575760 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bab26e39-d459-41cb-b009-09a804261374-client-ca\") pod \"controller-manager-549b6648c8-gf2nm\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.576043 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-55858\" (UniqueName: \"kubernetes.io/projected/492545c6-2cb5-4c78-afe9-29e92d5b70c4-kube-api-access-55858\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.576219 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16dafabd-7338-4eef-8866-7c8c005104fb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.576350 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16dafabd-7338-4eef-8866-7c8c005104fb-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.576510 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb7b07c4-a956-458e-abaa-06dc68b5e359-client-ca\") pod \"route-controller-manager-7f87b57647-4j86r\" (UID: \"bb7b07c4-a956-458e-abaa-06dc68b5e359\") " 
pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.576517 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/492545c6-2cb5-4c78-afe9-29e92d5b70c4-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.576611 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16dafabd-7338-4eef-8866-7c8c005104fb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.576627 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/492545c6-2cb5-4c78-afe9-29e92d5b70c4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.576642 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/492545c6-2cb5-4c78-afe9-29e92d5b70c4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.576657 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q5ws7\" (UniqueName: \"kubernetes.io/projected/16dafabd-7338-4eef-8866-7c8c005104fb-kube-api-access-q5ws7\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.576673 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/492545c6-2cb5-4c78-afe9-29e92d5b70c4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.576689 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16dafabd-7338-4eef-8866-7c8c005104fb-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.576708 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492545c6-2cb5-4c78-afe9-29e92d5b70c4-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.576110 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bb7b07c4-a956-458e-abaa-06dc68b5e359-tmp\") pod \"route-controller-manager-7f87b57647-4j86r\" (UID: \"bb7b07c4-a956-458e-abaa-06dc68b5e359\") " pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.576788 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bab26e39-d459-41cb-b009-09a804261374-proxy-ca-bundles\") pod \"controller-manager-549b6648c8-gf2nm\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.578144 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb7b07c4-a956-458e-abaa-06dc68b5e359-config\") pod \"route-controller-manager-7f87b57647-4j86r\" (UID: \"bb7b07c4-a956-458e-abaa-06dc68b5e359\") " pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.580194 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/bab26e39-d459-41cb-b009-09a804261374-serving-cert\") pod \"controller-manager-549b6648c8-gf2nm\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.580279 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb7b07c4-a956-458e-abaa-06dc68b5e359-serving-cert\") pod \"route-controller-manager-7f87b57647-4j86r\" (UID: \"bb7b07c4-a956-458e-abaa-06dc68b5e359\") " pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.596961 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrvzw\" (UniqueName: \"kubernetes.io/projected/bab26e39-d459-41cb-b009-09a804261374-kube-api-access-nrvzw\") pod \"controller-manager-549b6648c8-gf2nm\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.599155 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmc4p\" (UniqueName: \"kubernetes.io/projected/bb7b07c4-a956-458e-abaa-06dc68b5e359-kube-api-access-dmc4p\") pod \"route-controller-manager-7f87b57647-4j86r\" (UID: \"bb7b07c4-a956-458e-abaa-06dc68b5e359\") " pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.708985 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.739022 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.921004 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-549b6648c8-gf2nm"] Jan 22 14:18:49 crc kubenswrapper[5099]: W0122 14:18:49.938364 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbab26e39_d459_41cb_b009_09a804261374.slice/crio-0ade783951cc8b5e9942a3a8ff1dd27375a729d825d5446cae3876cdb9e563dd WatchSource:0}: Error finding container 0ade783951cc8b5e9942a3a8ff1dd27375a729d825d5446cae3876cdb9e563dd: Status 404 returned error can't find the container with id 0ade783951cc8b5e9942a3a8ff1dd27375a729d825d5446cae3876cdb9e563dd Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.972197 5099 generic.go:358] "Generic (PLEG): container finished" podID="492545c6-2cb5-4c78-afe9-29e92d5b70c4" containerID="c0b2ebdbae690661d46d759bc115dec97c86fe4edf5d33ab62408c9725768fa9" exitCode=0 Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.972272 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-549c648458-h5xck" event={"ID":"492545c6-2cb5-4c78-afe9-29e92d5b70c4","Type":"ContainerDied","Data":"c0b2ebdbae690661d46d759bc115dec97c86fe4edf5d33ab62408c9725768fa9"} Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.972300 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-549c648458-h5xck" event={"ID":"492545c6-2cb5-4c78-afe9-29e92d5b70c4","Type":"ContainerDied","Data":"3f9ae26c21b93c79918a2c66e8656eb1cb7a684633f10c3eab9ffb7a15db9c96"} Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.972317 5099 scope.go:117] "RemoveContainer" containerID="c0b2ebdbae690661d46d759bc115dec97c86fe4edf5d33ab62408c9725768fa9" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.975535 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r"] Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.975615 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-549c648458-h5xck" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.975945 5099 generic.go:358] "Generic (PLEG): container finished" podID="16dafabd-7338-4eef-8866-7c8c005104fb" containerID="2e032c4050218aec8b280c671e35f30d8fe054f1987f92ae0bdf23deea61d46f" exitCode=0 Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.976268 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.976965 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" event={"ID":"16dafabd-7338-4eef-8866-7c8c005104fb","Type":"ContainerDied","Data":"2e032c4050218aec8b280c671e35f30d8fe054f1987f92ae0bdf23deea61d46f"} Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.977036 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2" event={"ID":"16dafabd-7338-4eef-8866-7c8c005104fb","Type":"ContainerDied","Data":"e54d68d96e7cfd632296c66fa51186912904b220a3499795f7971d058e7cf4d4"} Jan 22 14:18:49 crc kubenswrapper[5099]: I0122 14:18:49.978801 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" event={"ID":"bab26e39-d459-41cb-b009-09a804261374","Type":"ContainerStarted","Data":"0ade783951cc8b5e9942a3a8ff1dd27375a729d825d5446cae3876cdb9e563dd"} Jan 22 14:18:50 crc kubenswrapper[5099]: I0122 14:18:50.010343 5099 scope.go:117] "RemoveContainer" containerID="c0b2ebdbae690661d46d759bc115dec97c86fe4edf5d33ab62408c9725768fa9" Jan 22 14:18:50 crc kubenswrapper[5099]: E0122 14:18:50.014907 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0b2ebdbae690661d46d759bc115dec97c86fe4edf5d33ab62408c9725768fa9\": container with ID starting with c0b2ebdbae690661d46d759bc115dec97c86fe4edf5d33ab62408c9725768fa9 not found: ID does not exist" containerID="c0b2ebdbae690661d46d759bc115dec97c86fe4edf5d33ab62408c9725768fa9" Jan 22 14:18:50 crc kubenswrapper[5099]: I0122 14:18:50.014950 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0b2ebdbae690661d46d759bc115dec97c86fe4edf5d33ab62408c9725768fa9"} err="failed to get container status \"c0b2ebdbae690661d46d759bc115dec97c86fe4edf5d33ab62408c9725768fa9\": rpc error: code = NotFound desc = could not find container \"c0b2ebdbae690661d46d759bc115dec97c86fe4edf5d33ab62408c9725768fa9\": container with ID starting with c0b2ebdbae690661d46d759bc115dec97c86fe4edf5d33ab62408c9725768fa9 not found: ID does not exist" Jan 22 14:18:50 crc kubenswrapper[5099]: I0122 14:18:50.014973 5099 scope.go:117] "RemoveContainer" containerID="2e032c4050218aec8b280c671e35f30d8fe054f1987f92ae0bdf23deea61d46f" Jan 22 14:18:50 crc kubenswrapper[5099]: I0122 14:18:50.015758 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-549c648458-h5xck"] Jan 22 14:18:50 crc kubenswrapper[5099]: I0122 14:18:50.021704 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-549c648458-h5xck"] Jan 22 14:18:50 crc kubenswrapper[5099]: I0122 14:18:50.025941 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2"] Jan 22 14:18:50 crc kubenswrapper[5099]: I0122 14:18:50.029646 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6444f75685-gkrv2"] Jan 22 14:18:50 crc kubenswrapper[5099]: I0122 14:18:50.038183 5099 scope.go:117] "RemoveContainer" containerID="2e032c4050218aec8b280c671e35f30d8fe054f1987f92ae0bdf23deea61d46f" Jan 22 14:18:50 crc 
kubenswrapper[5099]: E0122 14:18:50.038663 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e032c4050218aec8b280c671e35f30d8fe054f1987f92ae0bdf23deea61d46f\": container with ID starting with 2e032c4050218aec8b280c671e35f30d8fe054f1987f92ae0bdf23deea61d46f not found: ID does not exist" containerID="2e032c4050218aec8b280c671e35f30d8fe054f1987f92ae0bdf23deea61d46f" Jan 22 14:18:50 crc kubenswrapper[5099]: I0122 14:18:50.038710 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e032c4050218aec8b280c671e35f30d8fe054f1987f92ae0bdf23deea61d46f"} err="failed to get container status \"2e032c4050218aec8b280c671e35f30d8fe054f1987f92ae0bdf23deea61d46f\": rpc error: code = NotFound desc = could not find container \"2e032c4050218aec8b280c671e35f30d8fe054f1987f92ae0bdf23deea61d46f\": container with ID starting with 2e032c4050218aec8b280c671e35f30d8fe054f1987f92ae0bdf23deea61d46f not found: ID does not exist" Jan 22 14:18:50 crc kubenswrapper[5099]: I0122 14:18:50.770709 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16dafabd-7338-4eef-8866-7c8c005104fb" path="/var/lib/kubelet/pods/16dafabd-7338-4eef-8866-7c8c005104fb/volumes" Jan 22 14:18:50 crc kubenswrapper[5099]: I0122 14:18:50.771415 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="492545c6-2cb5-4c78-afe9-29e92d5b70c4" path="/var/lib/kubelet/pods/492545c6-2cb5-4c78-afe9-29e92d5b70c4/volumes" Jan 22 14:18:50 crc kubenswrapper[5099]: I0122 14:18:50.984721 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" event={"ID":"bab26e39-d459-41cb-b009-09a804261374","Type":"ContainerStarted","Data":"3587b9126625dc1b0ae24445bde61fe29fb9c2dea98872e891fcb80f0f735457"} Jan 22 14:18:50 crc kubenswrapper[5099]: I0122 14:18:50.985373 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:50 crc kubenswrapper[5099]: I0122 14:18:50.990815 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" event={"ID":"bb7b07c4-a956-458e-abaa-06dc68b5e359","Type":"ContainerStarted","Data":"f8482c34725b83531e45946d7b3d5fcc07b2bc762fe26f6267d19ba903d33711"} Jan 22 14:18:50 crc kubenswrapper[5099]: I0122 14:18:50.990864 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" Jan 22 14:18:50 crc kubenswrapper[5099]: I0122 14:18:50.990881 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" event={"ID":"bb7b07c4-a956-458e-abaa-06dc68b5e359","Type":"ContainerStarted","Data":"bbf5c6314168fa717cfb7f8a84d41b4b842535a5e24216b9580bd43aabb6ceb1"} Jan 22 14:18:50 crc kubenswrapper[5099]: I0122 14:18:50.995082 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:18:50 crc kubenswrapper[5099]: I0122 14:18:50.995805 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" Jan 22 14:18:51 crc kubenswrapper[5099]: I0122 14:18:51.007287 5099 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" podStartSLOduration=4.007264973 podStartE2EDuration="4.007264973s" podCreationTimestamp="2026-01-22 14:18:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:18:51.00562291 +0000 UTC m=+288.713373147" watchObservedRunningTime="2026-01-22 14:18:51.007264973 +0000 UTC m=+288.715015210" Jan 22 14:18:51 crc kubenswrapper[5099]: I0122 14:18:51.032694 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" podStartSLOduration=4.032671214 podStartE2EDuration="4.032671214s" podCreationTimestamp="2026-01-22 14:18:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:18:51.026503263 +0000 UTC m=+288.734253520" watchObservedRunningTime="2026-01-22 14:18:51.032671214 +0000 UTC m=+288.740421451" Jan 22 14:19:01 crc kubenswrapper[5099]: I0122 14:19:01.871701 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r"] Jan 22 14:19:01 crc kubenswrapper[5099]: I0122 14:19:01.872425 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" podUID="bb7b07c4-a956-458e-abaa-06dc68b5e359" containerName="route-controller-manager" containerID="cri-o://f8482c34725b83531e45946d7b3d5fcc07b2bc762fe26f6267d19ba903d33711" gracePeriod=30 Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.072149 5099 generic.go:358] "Generic (PLEG): container finished" podID="bb7b07c4-a956-458e-abaa-06dc68b5e359" containerID="f8482c34725b83531e45946d7b3d5fcc07b2bc762fe26f6267d19ba903d33711" exitCode=0 Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.072294 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" event={"ID":"bb7b07c4-a956-458e-abaa-06dc68b5e359","Type":"ContainerDied","Data":"f8482c34725b83531e45946d7b3d5fcc07b2bc762fe26f6267d19ba903d33711"} Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.334961 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.368317 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8"] Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.369068 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bb7b07c4-a956-458e-abaa-06dc68b5e359" containerName="route-controller-manager" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.369090 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb7b07c4-a956-458e-abaa-06dc68b5e359" containerName="route-controller-manager" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.369209 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="bb7b07c4-a956-458e-abaa-06dc68b5e359" containerName="route-controller-manager" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.381319 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.383575 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8"] Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.482953 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bb7b07c4-a956-458e-abaa-06dc68b5e359-tmp\") pod \"bb7b07c4-a956-458e-abaa-06dc68b5e359\" (UID: \"bb7b07c4-a956-458e-abaa-06dc68b5e359\") " Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.483043 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb7b07c4-a956-458e-abaa-06dc68b5e359-serving-cert\") pod \"bb7b07c4-a956-458e-abaa-06dc68b5e359\" (UID: \"bb7b07c4-a956-458e-abaa-06dc68b5e359\") " Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.483343 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmc4p\" (UniqueName: \"kubernetes.io/projected/bb7b07c4-a956-458e-abaa-06dc68b5e359-kube-api-access-dmc4p\") pod \"bb7b07c4-a956-458e-abaa-06dc68b5e359\" (UID: \"bb7b07c4-a956-458e-abaa-06dc68b5e359\") " Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.483422 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb7b07c4-a956-458e-abaa-06dc68b5e359-config\") pod \"bb7b07c4-a956-458e-abaa-06dc68b5e359\" (UID: \"bb7b07c4-a956-458e-abaa-06dc68b5e359\") " Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.483631 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb7b07c4-a956-458e-abaa-06dc68b5e359-client-ca\") pod \"bb7b07c4-a956-458e-abaa-06dc68b5e359\" (UID: \"bb7b07c4-a956-458e-abaa-06dc68b5e359\") " Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.484204 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51fe296-ec56-4a5b-bacf-c2c7afc26dc2-config\") pod \"route-controller-manager-6444f75685-5m5n8\" (UID: \"b51fe296-ec56-4a5b-bacf-c2c7afc26dc2\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.484135 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb7b07c4-a956-458e-abaa-06dc68b5e359-tmp" (OuterVolumeSpecName: "tmp") pod "bb7b07c4-a956-458e-abaa-06dc68b5e359" (UID: "bb7b07c4-a956-458e-abaa-06dc68b5e359"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.484317 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp94k\" (UniqueName: \"kubernetes.io/projected/b51fe296-ec56-4a5b-bacf-c2c7afc26dc2-kube-api-access-hp94k\") pod \"route-controller-manager-6444f75685-5m5n8\" (UID: \"b51fe296-ec56-4a5b-bacf-c2c7afc26dc2\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.484497 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b51fe296-ec56-4a5b-bacf-c2c7afc26dc2-tmp\") pod \"route-controller-manager-6444f75685-5m5n8\" (UID: \"b51fe296-ec56-4a5b-bacf-c2c7afc26dc2\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.484617 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b51fe296-ec56-4a5b-bacf-c2c7afc26dc2-serving-cert\") pod \"route-controller-manager-6444f75685-5m5n8\" (UID: \"b51fe296-ec56-4a5b-bacf-c2c7afc26dc2\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.484616 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb7b07c4-a956-458e-abaa-06dc68b5e359-client-ca" (OuterVolumeSpecName: "client-ca") pod "bb7b07c4-a956-458e-abaa-06dc68b5e359" (UID: "bb7b07c4-a956-458e-abaa-06dc68b5e359"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.484649 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb7b07c4-a956-458e-abaa-06dc68b5e359-config" (OuterVolumeSpecName: "config") pod "bb7b07c4-a956-458e-abaa-06dc68b5e359" (UID: "bb7b07c4-a956-458e-abaa-06dc68b5e359"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.484751 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b51fe296-ec56-4a5b-bacf-c2c7afc26dc2-client-ca\") pod \"route-controller-manager-6444f75685-5m5n8\" (UID: \"b51fe296-ec56-4a5b-bacf-c2c7afc26dc2\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.484982 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb7b07c4-a956-458e-abaa-06dc68b5e359-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.485005 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb7b07c4-a956-458e-abaa-06dc68b5e359-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.485024 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bb7b07c4-a956-458e-abaa-06dc68b5e359-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.490932 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb7b07c4-a956-458e-abaa-06dc68b5e359-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bb7b07c4-a956-458e-abaa-06dc68b5e359" (UID: "bb7b07c4-a956-458e-abaa-06dc68b5e359"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.490987 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb7b07c4-a956-458e-abaa-06dc68b5e359-kube-api-access-dmc4p" (OuterVolumeSpecName: "kube-api-access-dmc4p") pod "bb7b07c4-a956-458e-abaa-06dc68b5e359" (UID: "bb7b07c4-a956-458e-abaa-06dc68b5e359"). InnerVolumeSpecName "kube-api-access-dmc4p". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.586558 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hp94k\" (UniqueName: \"kubernetes.io/projected/b51fe296-ec56-4a5b-bacf-c2c7afc26dc2-kube-api-access-hp94k\") pod \"route-controller-manager-6444f75685-5m5n8\" (UID: \"b51fe296-ec56-4a5b-bacf-c2c7afc26dc2\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.586628 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b51fe296-ec56-4a5b-bacf-c2c7afc26dc2-tmp\") pod \"route-controller-manager-6444f75685-5m5n8\" (UID: \"b51fe296-ec56-4a5b-bacf-c2c7afc26dc2\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.586679 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b51fe296-ec56-4a5b-bacf-c2c7afc26dc2-serving-cert\") pod \"route-controller-manager-6444f75685-5m5n8\" (UID: \"b51fe296-ec56-4a5b-bacf-c2c7afc26dc2\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.586725 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b51fe296-ec56-4a5b-bacf-c2c7afc26dc2-client-ca\") pod \"route-controller-manager-6444f75685-5m5n8\" (UID: \"b51fe296-ec56-4a5b-bacf-c2c7afc26dc2\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.586823 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51fe296-ec56-4a5b-bacf-c2c7afc26dc2-config\") pod \"route-controller-manager-6444f75685-5m5n8\" (UID: \"b51fe296-ec56-4a5b-bacf-c2c7afc26dc2\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.586878 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dmc4p\" (UniqueName: \"kubernetes.io/projected/bb7b07c4-a956-458e-abaa-06dc68b5e359-kube-api-access-dmc4p\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.586891 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb7b07c4-a956-458e-abaa-06dc68b5e359-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.587503 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b51fe296-ec56-4a5b-bacf-c2c7afc26dc2-tmp\") pod \"route-controller-manager-6444f75685-5m5n8\" (UID: \"b51fe296-ec56-4a5b-bacf-c2c7afc26dc2\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.588343 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51fe296-ec56-4a5b-bacf-c2c7afc26dc2-config\") pod \"route-controller-manager-6444f75685-5m5n8\" (UID: \"b51fe296-ec56-4a5b-bacf-c2c7afc26dc2\") " 
pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.589239 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b51fe296-ec56-4a5b-bacf-c2c7afc26dc2-client-ca\") pod \"route-controller-manager-6444f75685-5m5n8\" (UID: \"b51fe296-ec56-4a5b-bacf-c2c7afc26dc2\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.592621 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b51fe296-ec56-4a5b-bacf-c2c7afc26dc2-serving-cert\") pod \"route-controller-manager-6444f75685-5m5n8\" (UID: \"b51fe296-ec56-4a5b-bacf-c2c7afc26dc2\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.617939 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp94k\" (UniqueName: \"kubernetes.io/projected/b51fe296-ec56-4a5b-bacf-c2c7afc26dc2-kube-api-access-hp94k\") pod \"route-controller-manager-6444f75685-5m5n8\" (UID: \"b51fe296-ec56-4a5b-bacf-c2c7afc26dc2\") " pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.707259 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" Jan 22 14:19:02 crc kubenswrapper[5099]: I0122 14:19:02.980972 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 14:19:03 crc kubenswrapper[5099]: I0122 14:19:03.000549 5099 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 14:19:03 crc kubenswrapper[5099]: I0122 14:19:03.083670 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" event={"ID":"bb7b07c4-a956-458e-abaa-06dc68b5e359","Type":"ContainerDied","Data":"bbf5c6314168fa717cfb7f8a84d41b4b842535a5e24216b9580bd43aabb6ceb1"} Jan 22 14:19:03 crc kubenswrapper[5099]: I0122 14:19:03.083769 5099 scope.go:117] "RemoveContainer" containerID="f8482c34725b83531e45946d7b3d5fcc07b2bc762fe26f6267d19ba903d33711" Jan 22 14:19:03 crc kubenswrapper[5099]: I0122 14:19:03.083820 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r" Jan 22 14:19:03 crc kubenswrapper[5099]: E0122 14:19:03.088266 5099 kuberuntime_gc.go:389] "Failed to remove container log dead symlink" err="remove /var/log/containers/route-controller-manager-7f87b57647-4j86r_openshift-route-controller-manager_route-controller-manager-f8482c34725b83531e45946d7b3d5fcc07b2bc762fe26f6267d19ba903d33711.log: no such file or directory" path="/var/log/containers/route-controller-manager-7f87b57647-4j86r_openshift-route-controller-manager_route-controller-manager-f8482c34725b83531e45946d7b3d5fcc07b2bc762fe26f6267d19ba903d33711.log" Jan 22 14:19:03 crc kubenswrapper[5099]: I0122 14:19:03.106618 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r"] Jan 22 14:19:03 crc kubenswrapper[5099]: I0122 14:19:03.112470 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f87b57647-4j86r"] Jan 22 14:19:03 crc kubenswrapper[5099]: I0122 14:19:03.162522 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8"] Jan 22 14:19:03 crc kubenswrapper[5099]: W0122 14:19:03.169083 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb51fe296_ec56_4a5b_bacf_c2c7afc26dc2.slice/crio-8523607700fda5fe4ccebe5727a5122f7fb168b36d480df33224544db42c6065 WatchSource:0}: Error finding container 8523607700fda5fe4ccebe5727a5122f7fb168b36d480df33224544db42c6065: Status 404 returned error can't find the container with id 8523607700fda5fe4ccebe5727a5122f7fb168b36d480df33224544db42c6065 Jan 22 14:19:04 crc kubenswrapper[5099]: I0122 14:19:04.095266 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" event={"ID":"b51fe296-ec56-4a5b-bacf-c2c7afc26dc2","Type":"ContainerStarted","Data":"ad3244868c12320b17c4cf4a65932fc3ef3752a31c31e13e06e29f4b8c10e698"} Jan 22 14:19:04 crc kubenswrapper[5099]: I0122 14:19:04.095352 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" event={"ID":"b51fe296-ec56-4a5b-bacf-c2c7afc26dc2","Type":"ContainerStarted","Data":"8523607700fda5fe4ccebe5727a5122f7fb168b36d480df33224544db42c6065"} Jan 22 14:19:04 crc kubenswrapper[5099]: I0122 14:19:04.095744 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" Jan 22 14:19:04 crc kubenswrapper[5099]: I0122 14:19:04.103290 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" Jan 22 14:19:04 crc kubenswrapper[5099]: I0122 14:19:04.123874 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6444f75685-5m5n8" podStartSLOduration=3.123844262 podStartE2EDuration="3.123844262s" podCreationTimestamp="2026-01-22 14:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:19:04.119348865 +0000 UTC m=+301.827099112" watchObservedRunningTime="2026-01-22 14:19:04.123844262 +0000 
UTC m=+301.831594499" Jan 22 14:19:04 crc kubenswrapper[5099]: I0122 14:19:04.769057 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb7b07c4-a956-458e-abaa-06dc68b5e359" path="/var/lib/kubelet/pods/bb7b07c4-a956-458e-abaa-06dc68b5e359/volumes" Jan 22 14:19:05 crc kubenswrapper[5099]: E0122 14:19:05.423374 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb7b07c4_a956_458e_abaa_06dc68b5e359.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb7b07c4_a956_458e_abaa_06dc68b5e359.slice/crio-bbf5c6314168fa717cfb7f8a84d41b4b842535a5e24216b9580bd43aabb6ceb1\": RecentStats: unable to find data in memory cache]" Jan 22 14:19:15 crc kubenswrapper[5099]: E0122 14:19:15.571808 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb7b07c4_a956_458e_abaa_06dc68b5e359.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb7b07c4_a956_458e_abaa_06dc68b5e359.slice/crio-bbf5c6314168fa717cfb7f8a84d41b4b842535a5e24216b9580bd43aabb6ceb1\": RecentStats: unable to find data in memory cache]" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.131463 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gj5rr"] Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.133229 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gj5rr" podUID="b3266538-9050-43ad-a3d6-7428f83aa788" containerName="registry-server" containerID="cri-o://e2e79a21b653fa291a83ffbd0afbf6aef0c182483e0edc63548d01f498d0318b" gracePeriod=30 Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.147404 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jcl5d"] Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.147833 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jcl5d" podUID="fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5" containerName="registry-server" containerID="cri-o://62533b9c54a55bfe9392ffcee50f3c4d61f22d499560e1d25dab231a56766b1f" gracePeriod=30 Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.163107 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9nglq"] Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.163539 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" podUID="ad87e7e8-19c1-4c92-9400-9873a85e80b4" containerName="marketplace-operator" containerID="cri-o://87f317a618386b949ca622d41a46e6b8feabcbcc205992cfe9c9eaf6d7d3ebde" gracePeriod=30 Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.187997 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7vcn"] Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.188655 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k7vcn" podUID="83ff52d5-f127-494f-b2bd-e9a98e556392" containerName="registry-server" 
containerID="cri-o://3d4d20ace4b753579568eceacae90a039b99e88e71a31847cf092dcc0f046f93" gracePeriod=30 Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.192161 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-79vth"] Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.192651 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-79vth" podUID="2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe" containerName="registry-server" containerID="cri-o://7a5610d6ce5903652f450a646ba9a4e976a3a9fbd9cde3f249e070dc9b370ca8" gracePeriod=30 Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.206626 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-j57c2"] Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.271523 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-j57c2"] Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.271856 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-j57c2" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.410811 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k599h\" (UniqueName: \"kubernetes.io/projected/72bbdeee-518a-4576-b7a2-5e89e0ae701f-kube-api-access-k599h\") pod \"marketplace-operator-547dbd544d-j57c2\" (UID: \"72bbdeee-518a-4576-b7a2-5e89e0ae701f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j57c2" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.411295 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/72bbdeee-518a-4576-b7a2-5e89e0ae701f-tmp\") pod \"marketplace-operator-547dbd544d-j57c2\" (UID: \"72bbdeee-518a-4576-b7a2-5e89e0ae701f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j57c2" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.411333 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/72bbdeee-518a-4576-b7a2-5e89e0ae701f-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-j57c2\" (UID: \"72bbdeee-518a-4576-b7a2-5e89e0ae701f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j57c2" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.411357 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72bbdeee-518a-4576-b7a2-5e89e0ae701f-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-j57c2\" (UID: \"72bbdeee-518a-4576-b7a2-5e89e0ae701f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j57c2" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.513078 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/72bbdeee-518a-4576-b7a2-5e89e0ae701f-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-j57c2\" (UID: \"72bbdeee-518a-4576-b7a2-5e89e0ae701f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j57c2" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.513134 5099 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72bbdeee-518a-4576-b7a2-5e89e0ae701f-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-j57c2\" (UID: \"72bbdeee-518a-4576-b7a2-5e89e0ae701f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j57c2" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.513887 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k599h\" (UniqueName: \"kubernetes.io/projected/72bbdeee-518a-4576-b7a2-5e89e0ae701f-kube-api-access-k599h\") pod \"marketplace-operator-547dbd544d-j57c2\" (UID: \"72bbdeee-518a-4576-b7a2-5e89e0ae701f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j57c2" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.513949 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/72bbdeee-518a-4576-b7a2-5e89e0ae701f-tmp\") pod \"marketplace-operator-547dbd544d-j57c2\" (UID: \"72bbdeee-518a-4576-b7a2-5e89e0ae701f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j57c2" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.514562 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/72bbdeee-518a-4576-b7a2-5e89e0ae701f-tmp\") pod \"marketplace-operator-547dbd544d-j57c2\" (UID: \"72bbdeee-518a-4576-b7a2-5e89e0ae701f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j57c2" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.515697 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72bbdeee-518a-4576-b7a2-5e89e0ae701f-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-j57c2\" (UID: \"72bbdeee-518a-4576-b7a2-5e89e0ae701f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j57c2" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.522999 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/72bbdeee-518a-4576-b7a2-5e89e0ae701f-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-j57c2\" (UID: \"72bbdeee-518a-4576-b7a2-5e89e0ae701f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j57c2" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.529972 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k599h\" (UniqueName: \"kubernetes.io/projected/72bbdeee-518a-4576-b7a2-5e89e0ae701f-kube-api-access-k599h\") pod \"marketplace-operator-547dbd544d-j57c2\" (UID: \"72bbdeee-518a-4576-b7a2-5e89e0ae701f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-j57c2" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.694499 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-j57c2" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.711018 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7vcn" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.793826 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.824921 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knkkd\" (UniqueName: \"kubernetes.io/projected/83ff52d5-f127-494f-b2bd-e9a98e556392-kube-api-access-knkkd\") pod \"83ff52d5-f127-494f-b2bd-e9a98e556392\" (UID: \"83ff52d5-f127-494f-b2bd-e9a98e556392\") " Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.825612 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ff52d5-f127-494f-b2bd-e9a98e556392-catalog-content\") pod \"83ff52d5-f127-494f-b2bd-e9a98e556392\" (UID: \"83ff52d5-f127-494f-b2bd-e9a98e556392\") " Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.825691 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ff52d5-f127-494f-b2bd-e9a98e556392-utilities\") pod \"83ff52d5-f127-494f-b2bd-e9a98e556392\" (UID: \"83ff52d5-f127-494f-b2bd-e9a98e556392\") " Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.827435 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83ff52d5-f127-494f-b2bd-e9a98e556392-utilities" (OuterVolumeSpecName: "utilities") pod "83ff52d5-f127-494f-b2bd-e9a98e556392" (UID: "83ff52d5-f127-494f-b2bd-e9a98e556392"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.831842 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-79vth" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.833651 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ff52d5-f127-494f-b2bd-e9a98e556392-kube-api-access-knkkd" (OuterVolumeSpecName: "kube-api-access-knkkd") pod "83ff52d5-f127-494f-b2bd-e9a98e556392" (UID: "83ff52d5-f127-494f-b2bd-e9a98e556392"). InnerVolumeSpecName "kube-api-access-knkkd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.849450 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83ff52d5-f127-494f-b2bd-e9a98e556392-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83ff52d5-f127-494f-b2bd-e9a98e556392" (UID: "83ff52d5-f127-494f-b2bd-e9a98e556392"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.926995 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ad87e7e8-19c1-4c92-9400-9873a85e80b4-marketplace-operator-metrics\") pod \"ad87e7e8-19c1-4c92-9400-9873a85e80b4\" (UID: \"ad87e7e8-19c1-4c92-9400-9873a85e80b4\") " Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.927121 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad87e7e8-19c1-4c92-9400-9873a85e80b4-marketplace-trusted-ca\") pod \"ad87e7e8-19c1-4c92-9400-9873a85e80b4\" (UID: \"ad87e7e8-19c1-4c92-9400-9873a85e80b4\") " Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.927152 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g95w7\" (UniqueName: \"kubernetes.io/projected/ad87e7e8-19c1-4c92-9400-9873a85e80b4-kube-api-access-g95w7\") pod \"ad87e7e8-19c1-4c92-9400-9873a85e80b4\" (UID: \"ad87e7e8-19c1-4c92-9400-9873a85e80b4\") " Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.927216 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9wgl\" (UniqueName: \"kubernetes.io/projected/2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe-kube-api-access-s9wgl\") pod \"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe\" (UID: \"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe\") " Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.927284 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe-utilities\") pod \"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe\" (UID: \"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe\") " Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.927335 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe-catalog-content\") pod \"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe\" (UID: \"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe\") " Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.927365 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ad87e7e8-19c1-4c92-9400-9873a85e80b4-tmp\") pod \"ad87e7e8-19c1-4c92-9400-9873a85e80b4\" (UID: \"ad87e7e8-19c1-4c92-9400-9873a85e80b4\") " Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.927953 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad87e7e8-19c1-4c92-9400-9873a85e80b4-tmp" (OuterVolumeSpecName: "tmp") pod "ad87e7e8-19c1-4c92-9400-9873a85e80b4" (UID: "ad87e7e8-19c1-4c92-9400-9873a85e80b4"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.928489 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe-utilities" (OuterVolumeSpecName: "utilities") pod "2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe" (UID: "2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.928541 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad87e7e8-19c1-4c92-9400-9873a85e80b4-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "ad87e7e8-19c1-4c92-9400-9873a85e80b4" (UID: "ad87e7e8-19c1-4c92-9400-9873a85e80b4"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.933650 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe-kube-api-access-s9wgl" (OuterVolumeSpecName: "kube-api-access-s9wgl") pod "2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe" (UID: "2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe"). InnerVolumeSpecName "kube-api-access-s9wgl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.933829 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad87e7e8-19c1-4c92-9400-9873a85e80b4-kube-api-access-g95w7" (OuterVolumeSpecName: "kube-api-access-g95w7") pod "ad87e7e8-19c1-4c92-9400-9873a85e80b4" (UID: "ad87e7e8-19c1-4c92-9400-9873a85e80b4"). InnerVolumeSpecName "kube-api-access-g95w7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.933924 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad87e7e8-19c1-4c92-9400-9873a85e80b4-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "ad87e7e8-19c1-4c92-9400-9873a85e80b4" (UID: "ad87e7e8-19c1-4c92-9400-9873a85e80b4"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.942961 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ff52d5-f127-494f-b2bd-e9a98e556392-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.943004 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.943018 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ff52d5-f127-494f-b2bd-e9a98e556392-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.943030 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ad87e7e8-19c1-4c92-9400-9873a85e80b4-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.943079 5099 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ad87e7e8-19c1-4c92-9400-9873a85e80b4-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.943095 5099 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad87e7e8-19c1-4c92-9400-9873a85e80b4-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.943107 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g95w7\" (UniqueName: \"kubernetes.io/projected/ad87e7e8-19c1-4c92-9400-9873a85e80b4-kube-api-access-g95w7\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.943120 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-knkkd\" (UniqueName: \"kubernetes.io/projected/83ff52d5-f127-494f-b2bd-e9a98e556392-kube-api-access-knkkd\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:17 crc kubenswrapper[5099]: I0122 14:19:17.943132 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s9wgl\" (UniqueName: \"kubernetes.io/projected/2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe-kube-api-access-s9wgl\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.023776 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe" (UID: "2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.046025 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.152316 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-j57c2"] Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.167963 5099 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.203441 5099 generic.go:358] "Generic (PLEG): container finished" podID="b3266538-9050-43ad-a3d6-7428f83aa788" containerID="e2e79a21b653fa291a83ffbd0afbf6aef0c182483e0edc63548d01f498d0318b" exitCode=0 Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.203508 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gj5rr" event={"ID":"b3266538-9050-43ad-a3d6-7428f83aa788","Type":"ContainerDied","Data":"e2e79a21b653fa291a83ffbd0afbf6aef0c182483e0edc63548d01f498d0318b"} Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.209446 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jcl5d" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.211346 5099 generic.go:358] "Generic (PLEG): container finished" podID="83ff52d5-f127-494f-b2bd-e9a98e556392" containerID="3d4d20ace4b753579568eceacae90a039b99e88e71a31847cf092dcc0f046f93" exitCode=0 Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.211437 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7vcn" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.211507 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7vcn" event={"ID":"83ff52d5-f127-494f-b2bd-e9a98e556392","Type":"ContainerDied","Data":"3d4d20ace4b753579568eceacae90a039b99e88e71a31847cf092dcc0f046f93"} Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.211540 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7vcn" event={"ID":"83ff52d5-f127-494f-b2bd-e9a98e556392","Type":"ContainerDied","Data":"18316826833ea9a60b2567baea3eaf70298a599a266fa7865f29018033241963"} Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.211563 5099 scope.go:117] "RemoveContainer" containerID="3d4d20ace4b753579568eceacae90a039b99e88e71a31847cf092dcc0f046f93" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.216360 5099 generic.go:358] "Generic (PLEG): container finished" podID="fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5" containerID="62533b9c54a55bfe9392ffcee50f3c4d61f22d499560e1d25dab231a56766b1f" exitCode=0 Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.216539 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jcl5d" event={"ID":"fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5","Type":"ContainerDied","Data":"62533b9c54a55bfe9392ffcee50f3c4d61f22d499560e1d25dab231a56766b1f"} Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.216700 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jcl5d" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.220899 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-j57c2" event={"ID":"72bbdeee-518a-4576-b7a2-5e89e0ae701f","Type":"ContainerStarted","Data":"07a946755062a9561fea5ca4e771537de58af1f0f6e7ef0383a3263d15827542"} Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.221205 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gj5rr" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.223577 5099 generic.go:358] "Generic (PLEG): container finished" podID="2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe" containerID="7a5610d6ce5903652f450a646ba9a4e976a3a9fbd9cde3f249e070dc9b370ca8" exitCode=0 Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.223669 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79vth" event={"ID":"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe","Type":"ContainerDied","Data":"7a5610d6ce5903652f450a646ba9a4e976a3a9fbd9cde3f249e070dc9b370ca8"} Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.223690 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79vth" event={"ID":"2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe","Type":"ContainerDied","Data":"7d39ba3c6874429dbaa60d19eb7873e20874319a39084d2ea71680c990460c28"} Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.223822 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-79vth" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.241344 5099 scope.go:117] "RemoveContainer" containerID="e6653d3f379301bd29eb0ccbcfa53e558de537ba1698457cf43f4ecec43f78cc" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.243987 5099 generic.go:358] "Generic (PLEG): container finished" podID="ad87e7e8-19c1-4c92-9400-9873a85e80b4" containerID="87f317a618386b949ca622d41a46e6b8feabcbcc205992cfe9c9eaf6d7d3ebde" exitCode=0 Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.244061 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" event={"ID":"ad87e7e8-19c1-4c92-9400-9873a85e80b4","Type":"ContainerDied","Data":"87f317a618386b949ca622d41a46e6b8feabcbcc205992cfe9c9eaf6d7d3ebde"} Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.244089 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" event={"ID":"ad87e7e8-19c1-4c92-9400-9873a85e80b4","Type":"ContainerDied","Data":"ba25363c2d79aba62c98e32ef1cfe3eab986f26922f39054acafbb8fa7cbe6f4"} Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.244276 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-9nglq" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.309822 5099 scope.go:117] "RemoveContainer" containerID="9e47157eef931cc988aef84c69b28d3a4b69adda50a02b81dfdcd7f8fd726ee4" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.340084 5099 scope.go:117] "RemoveContainer" containerID="3d4d20ace4b753579568eceacae90a039b99e88e71a31847cf092dcc0f046f93" Jan 22 14:19:18 crc kubenswrapper[5099]: E0122 14:19:18.340668 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d4d20ace4b753579568eceacae90a039b99e88e71a31847cf092dcc0f046f93\": container with ID starting with 3d4d20ace4b753579568eceacae90a039b99e88e71a31847cf092dcc0f046f93 not found: ID does not exist" containerID="3d4d20ace4b753579568eceacae90a039b99e88e71a31847cf092dcc0f046f93" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.340704 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d4d20ace4b753579568eceacae90a039b99e88e71a31847cf092dcc0f046f93"} err="failed to get container status \"3d4d20ace4b753579568eceacae90a039b99e88e71a31847cf092dcc0f046f93\": rpc error: code = NotFound desc = could not find container \"3d4d20ace4b753579568eceacae90a039b99e88e71a31847cf092dcc0f046f93\": container with ID starting with 3d4d20ace4b753579568eceacae90a039b99e88e71a31847cf092dcc0f046f93 not found: ID does not exist" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.340729 5099 scope.go:117] "RemoveContainer" containerID="e6653d3f379301bd29eb0ccbcfa53e558de537ba1698457cf43f4ecec43f78cc" Jan 22 14:19:18 crc kubenswrapper[5099]: E0122 14:19:18.341098 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6653d3f379301bd29eb0ccbcfa53e558de537ba1698457cf43f4ecec43f78cc\": container with ID starting with e6653d3f379301bd29eb0ccbcfa53e558de537ba1698457cf43f4ecec43f78cc not found: ID does not exist" containerID="e6653d3f379301bd29eb0ccbcfa53e558de537ba1698457cf43f4ecec43f78cc" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.341127 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6653d3f379301bd29eb0ccbcfa53e558de537ba1698457cf43f4ecec43f78cc"} err="failed to get container status \"e6653d3f379301bd29eb0ccbcfa53e558de537ba1698457cf43f4ecec43f78cc\": rpc error: code = NotFound desc = could not find container \"e6653d3f379301bd29eb0ccbcfa53e558de537ba1698457cf43f4ecec43f78cc\": container with ID starting with e6653d3f379301bd29eb0ccbcfa53e558de537ba1698457cf43f4ecec43f78cc not found: ID does not exist" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.341144 5099 scope.go:117] "RemoveContainer" containerID="9e47157eef931cc988aef84c69b28d3a4b69adda50a02b81dfdcd7f8fd726ee4" Jan 22 14:19:18 crc kubenswrapper[5099]: E0122 14:19:18.341340 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e47157eef931cc988aef84c69b28d3a4b69adda50a02b81dfdcd7f8fd726ee4\": container with ID starting with 9e47157eef931cc988aef84c69b28d3a4b69adda50a02b81dfdcd7f8fd726ee4 not found: ID does not exist" containerID="9e47157eef931cc988aef84c69b28d3a4b69adda50a02b81dfdcd7f8fd726ee4" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.341364 5099 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9e47157eef931cc988aef84c69b28d3a4b69adda50a02b81dfdcd7f8fd726ee4"} err="failed to get container status \"9e47157eef931cc988aef84c69b28d3a4b69adda50a02b81dfdcd7f8fd726ee4\": rpc error: code = NotFound desc = could not find container \"9e47157eef931cc988aef84c69b28d3a4b69adda50a02b81dfdcd7f8fd726ee4\": container with ID starting with 9e47157eef931cc988aef84c69b28d3a4b69adda50a02b81dfdcd7f8fd726ee4 not found: ID does not exist" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.341380 5099 scope.go:117] "RemoveContainer" containerID="62533b9c54a55bfe9392ffcee50f3c4d61f22d499560e1d25dab231a56766b1f" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.348991 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5-catalog-content\") pod \"fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5\" (UID: \"fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5\") " Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.349282 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3266538-9050-43ad-a3d6-7428f83aa788-catalog-content\") pod \"b3266538-9050-43ad-a3d6-7428f83aa788\" (UID: \"b3266538-9050-43ad-a3d6-7428f83aa788\") " Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.349426 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5-utilities\") pod \"fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5\" (UID: \"fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5\") " Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.350670 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5-utilities" (OuterVolumeSpecName: "utilities") pod "fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5" (UID: "fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.360724 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3266538-9050-43ad-a3d6-7428f83aa788-utilities\") pod \"b3266538-9050-43ad-a3d6-7428f83aa788\" (UID: \"b3266538-9050-43ad-a3d6-7428f83aa788\") " Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.360892 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6vb5\" (UniqueName: \"kubernetes.io/projected/fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5-kube-api-access-g6vb5\") pod \"fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5\" (UID: \"fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5\") " Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.360925 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9tcs\" (UniqueName: \"kubernetes.io/projected/b3266538-9050-43ad-a3d6-7428f83aa788-kube-api-access-p9tcs\") pod \"b3266538-9050-43ad-a3d6-7428f83aa788\" (UID: \"b3266538-9050-43ad-a3d6-7428f83aa788\") " Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.362720 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3266538-9050-43ad-a3d6-7428f83aa788-utilities" (OuterVolumeSpecName: "utilities") pod "b3266538-9050-43ad-a3d6-7428f83aa788" (UID: "b3266538-9050-43ad-a3d6-7428f83aa788"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.363718 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.363739 5099 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3266538-9050-43ad-a3d6-7428f83aa788-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.365376 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-79vth"] Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.368403 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-79vth"] Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.380836 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7vcn"] Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.388601 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7vcn"] Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.393298 5099 scope.go:117] "RemoveContainer" containerID="56890c8575c7e18f97b76157cd0321e71a76208b3fe3c4ac87953525fb9d74bd" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.397834 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9nglq"] Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.399733 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5-kube-api-access-g6vb5" (OuterVolumeSpecName: "kube-api-access-g6vb5") pod "fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5" (UID: "fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5"). InnerVolumeSpecName "kube-api-access-g6vb5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.399805 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3266538-9050-43ad-a3d6-7428f83aa788-kube-api-access-p9tcs" (OuterVolumeSpecName: "kube-api-access-p9tcs") pod "b3266538-9050-43ad-a3d6-7428f83aa788" (UID: "b3266538-9050-43ad-a3d6-7428f83aa788"). InnerVolumeSpecName "kube-api-access-p9tcs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.401811 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9nglq"] Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.403288 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3266538-9050-43ad-a3d6-7428f83aa788-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3266538-9050-43ad-a3d6-7428f83aa788" (UID: "b3266538-9050-43ad-a3d6-7428f83aa788"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.416248 5099 scope.go:117] "RemoveContainer" containerID="95cbbc688bc860d2fc52566b28db209114d0f7a0bb2c36bf7703acda17639d55" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.421496 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5" (UID: "fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.435118 5099 scope.go:117] "RemoveContainer" containerID="7a5610d6ce5903652f450a646ba9a4e976a3a9fbd9cde3f249e070dc9b370ca8" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.464578 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g6vb5\" (UniqueName: \"kubernetes.io/projected/fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5-kube-api-access-g6vb5\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.464617 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p9tcs\" (UniqueName: \"kubernetes.io/projected/b3266538-9050-43ad-a3d6-7428f83aa788-kube-api-access-p9tcs\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.464630 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.464641 5099 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3266538-9050-43ad-a3d6-7428f83aa788-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.470679 5099 scope.go:117] "RemoveContainer" containerID="096fd23928ef15c4eea4dee2724aefcc2a297e9a3d56e1d546c15e23388d92b9" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.511173 5099 scope.go:117] "RemoveContainer" containerID="dc160a6faa2b0b898a91efd66fb06a820808e87e37a7deb472be2abf2dbfedd2" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.526638 5099 scope.go:117] "RemoveContainer" containerID="7a5610d6ce5903652f450a646ba9a4e976a3a9fbd9cde3f249e070dc9b370ca8" Jan 22 14:19:18 crc kubenswrapper[5099]: E0122 14:19:18.527156 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a5610d6ce5903652f450a646ba9a4e976a3a9fbd9cde3f249e070dc9b370ca8\": container with ID starting with 7a5610d6ce5903652f450a646ba9a4e976a3a9fbd9cde3f249e070dc9b370ca8 not found: ID does not exist" containerID="7a5610d6ce5903652f450a646ba9a4e976a3a9fbd9cde3f249e070dc9b370ca8" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.527266 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a5610d6ce5903652f450a646ba9a4e976a3a9fbd9cde3f249e070dc9b370ca8"} err="failed to get container status \"7a5610d6ce5903652f450a646ba9a4e976a3a9fbd9cde3f249e070dc9b370ca8\": rpc error: code = NotFound desc = could not find container \"7a5610d6ce5903652f450a646ba9a4e976a3a9fbd9cde3f249e070dc9b370ca8\": container with ID starting with 7a5610d6ce5903652f450a646ba9a4e976a3a9fbd9cde3f249e070dc9b370ca8 not found: ID does not 
exist" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.527300 5099 scope.go:117] "RemoveContainer" containerID="096fd23928ef15c4eea4dee2724aefcc2a297e9a3d56e1d546c15e23388d92b9" Jan 22 14:19:18 crc kubenswrapper[5099]: E0122 14:19:18.527735 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"096fd23928ef15c4eea4dee2724aefcc2a297e9a3d56e1d546c15e23388d92b9\": container with ID starting with 096fd23928ef15c4eea4dee2724aefcc2a297e9a3d56e1d546c15e23388d92b9 not found: ID does not exist" containerID="096fd23928ef15c4eea4dee2724aefcc2a297e9a3d56e1d546c15e23388d92b9" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.527765 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"096fd23928ef15c4eea4dee2724aefcc2a297e9a3d56e1d546c15e23388d92b9"} err="failed to get container status \"096fd23928ef15c4eea4dee2724aefcc2a297e9a3d56e1d546c15e23388d92b9\": rpc error: code = NotFound desc = could not find container \"096fd23928ef15c4eea4dee2724aefcc2a297e9a3d56e1d546c15e23388d92b9\": container with ID starting with 096fd23928ef15c4eea4dee2724aefcc2a297e9a3d56e1d546c15e23388d92b9 not found: ID does not exist" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.527784 5099 scope.go:117] "RemoveContainer" containerID="dc160a6faa2b0b898a91efd66fb06a820808e87e37a7deb472be2abf2dbfedd2" Jan 22 14:19:18 crc kubenswrapper[5099]: E0122 14:19:18.528108 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc160a6faa2b0b898a91efd66fb06a820808e87e37a7deb472be2abf2dbfedd2\": container with ID starting with dc160a6faa2b0b898a91efd66fb06a820808e87e37a7deb472be2abf2dbfedd2 not found: ID does not exist" containerID="dc160a6faa2b0b898a91efd66fb06a820808e87e37a7deb472be2abf2dbfedd2" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.528182 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc160a6faa2b0b898a91efd66fb06a820808e87e37a7deb472be2abf2dbfedd2"} err="failed to get container status \"dc160a6faa2b0b898a91efd66fb06a820808e87e37a7deb472be2abf2dbfedd2\": rpc error: code = NotFound desc = could not find container \"dc160a6faa2b0b898a91efd66fb06a820808e87e37a7deb472be2abf2dbfedd2\": container with ID starting with dc160a6faa2b0b898a91efd66fb06a820808e87e37a7deb472be2abf2dbfedd2 not found: ID does not exist" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.528223 5099 scope.go:117] "RemoveContainer" containerID="87f317a618386b949ca622d41a46e6b8feabcbcc205992cfe9c9eaf6d7d3ebde" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.542038 5099 scope.go:117] "RemoveContainer" containerID="e0d647cdd912e67a0310f0285415a7dc9ed9f6f36a2846b251cb651ea8016a0c" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.551243 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jcl5d"] Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.556140 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jcl5d"] Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.562105 5099 scope.go:117] "RemoveContainer" containerID="87f317a618386b949ca622d41a46e6b8feabcbcc205992cfe9c9eaf6d7d3ebde" Jan 22 14:19:18 crc kubenswrapper[5099]: E0122 14:19:18.562446 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"87f317a618386b949ca622d41a46e6b8feabcbcc205992cfe9c9eaf6d7d3ebde\": container with ID starting with 87f317a618386b949ca622d41a46e6b8feabcbcc205992cfe9c9eaf6d7d3ebde not found: ID does not exist" containerID="87f317a618386b949ca622d41a46e6b8feabcbcc205992cfe9c9eaf6d7d3ebde" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.562485 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87f317a618386b949ca622d41a46e6b8feabcbcc205992cfe9c9eaf6d7d3ebde"} err="failed to get container status \"87f317a618386b949ca622d41a46e6b8feabcbcc205992cfe9c9eaf6d7d3ebde\": rpc error: code = NotFound desc = could not find container \"87f317a618386b949ca622d41a46e6b8feabcbcc205992cfe9c9eaf6d7d3ebde\": container with ID starting with 87f317a618386b949ca622d41a46e6b8feabcbcc205992cfe9c9eaf6d7d3ebde not found: ID does not exist" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.562508 5099 scope.go:117] "RemoveContainer" containerID="e0d647cdd912e67a0310f0285415a7dc9ed9f6f36a2846b251cb651ea8016a0c" Jan 22 14:19:18 crc kubenswrapper[5099]: E0122 14:19:18.562758 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0d647cdd912e67a0310f0285415a7dc9ed9f6f36a2846b251cb651ea8016a0c\": container with ID starting with e0d647cdd912e67a0310f0285415a7dc9ed9f6f36a2846b251cb651ea8016a0c not found: ID does not exist" containerID="e0d647cdd912e67a0310f0285415a7dc9ed9f6f36a2846b251cb651ea8016a0c" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.562786 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0d647cdd912e67a0310f0285415a7dc9ed9f6f36a2846b251cb651ea8016a0c"} err="failed to get container status \"e0d647cdd912e67a0310f0285415a7dc9ed9f6f36a2846b251cb651ea8016a0c\": rpc error: code = NotFound desc = could not find container \"e0d647cdd912e67a0310f0285415a7dc9ed9f6f36a2846b251cb651ea8016a0c\": container with ID starting with e0d647cdd912e67a0310f0285415a7dc9ed9f6f36a2846b251cb651ea8016a0c not found: ID does not exist" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.769033 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe" path="/var/lib/kubelet/pods/2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe/volumes" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.769660 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ff52d5-f127-494f-b2bd-e9a98e556392" path="/var/lib/kubelet/pods/83ff52d5-f127-494f-b2bd-e9a98e556392/volumes" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.770291 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad87e7e8-19c1-4c92-9400-9873a85e80b4" path="/var/lib/kubelet/pods/ad87e7e8-19c1-4c92-9400-9873a85e80b4/volumes" Jan 22 14:19:18 crc kubenswrapper[5099]: I0122 14:19:18.770730 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5" path="/var/lib/kubelet/pods/fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5/volumes" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.142265 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-549b6648c8-gf2nm"] Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.142530 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" podUID="bab26e39-d459-41cb-b009-09a804261374" 
containerName="controller-manager" containerID="cri-o://3587b9126625dc1b0ae24445bde61fe29fb9c2dea98872e891fcb80f0f735457" gracePeriod=30 Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.253199 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gj5rr" event={"ID":"b3266538-9050-43ad-a3d6-7428f83aa788","Type":"ContainerDied","Data":"bf97dcb27cac54113547a234b424212bb34d0766b85b765c26bfec6e9ca9d1ad"} Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.253252 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gj5rr" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.253258 5099 scope.go:117] "RemoveContainer" containerID="e2e79a21b653fa291a83ffbd0afbf6aef0c182483e0edc63548d01f498d0318b" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.265608 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-j57c2" event={"ID":"72bbdeee-518a-4576-b7a2-5e89e0ae701f","Type":"ContainerStarted","Data":"1e0be0439506684c74ed5ac1db898c69bc98cd77053b5be9a151240e1892929f"} Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.265992 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-j57c2" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.270151 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-j57c2" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.280916 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-j57c2" podStartSLOduration=2.280895246 podStartE2EDuration="2.280895246s" podCreationTimestamp="2026-01-22 14:19:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:19:19.279652714 +0000 UTC m=+316.987402961" watchObservedRunningTime="2026-01-22 14:19:19.280895246 +0000 UTC m=+316.988645483" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.336201 5099 scope.go:117] "RemoveContainer" containerID="31c12304712834620597add05cc9d17c1d39d5079b9a1fe7dd3e839fa1f36a6a" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.336543 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gj5rr"] Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.342579 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gj5rr"] Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.362374 5099 scope.go:117] "RemoveContainer" containerID="67d19507ea1b84109f4145080b188fa110af00707d9f4e67b95eecd65ca28133" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.945649 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4t4cn"] Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.946993 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3266538-9050-43ad-a3d6-7428f83aa788" containerName="extract-content" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.947124 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3266538-9050-43ad-a3d6-7428f83aa788" containerName="extract-content" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.947298 5099 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5" containerName="extract-content" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.947383 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5" containerName="extract-content" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.947461 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="83ff52d5-f127-494f-b2bd-e9a98e556392" containerName="extract-utilities" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.947544 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ff52d5-f127-494f-b2bd-e9a98e556392" containerName="extract-utilities" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.947619 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe" containerName="extract-utilities" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.947703 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe" containerName="extract-utilities" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.947781 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ad87e7e8-19c1-4c92-9400-9873a85e80b4" containerName="marketplace-operator" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.947850 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad87e7e8-19c1-4c92-9400-9873a85e80b4" containerName="marketplace-operator" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.947926 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5" containerName="registry-server" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.947996 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5" containerName="registry-server" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.948078 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="83ff52d5-f127-494f-b2bd-e9a98e556392" containerName="extract-content" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.948135 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ff52d5-f127-494f-b2bd-e9a98e556392" containerName="extract-content" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.948205 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="83ff52d5-f127-494f-b2bd-e9a98e556392" containerName="registry-server" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.948268 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ff52d5-f127-494f-b2bd-e9a98e556392" containerName="registry-server" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.948321 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3266538-9050-43ad-a3d6-7428f83aa788" containerName="extract-utilities" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.948373 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3266538-9050-43ad-a3d6-7428f83aa788" containerName="extract-utilities" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.948432 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe" containerName="extract-content" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.948491 5099 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe" containerName="extract-content" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.948568 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe" containerName="registry-server" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.948645 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe" containerName="registry-server" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.948722 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5" containerName="extract-utilities" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.948787 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5" containerName="extract-utilities" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.948852 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3266538-9050-43ad-a3d6-7428f83aa788" containerName="registry-server" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.948916 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3266538-9050-43ad-a3d6-7428f83aa788" containerName="registry-server" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.948997 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ad87e7e8-19c1-4c92-9400-9873a85e80b4" containerName="marketplace-operator" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.949073 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad87e7e8-19c1-4c92-9400-9873a85e80b4" containerName="marketplace-operator" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.949298 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="ad87e7e8-19c1-4c92-9400-9873a85e80b4" containerName="marketplace-operator" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.949395 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="83ff52d5-f127-494f-b2bd-e9a98e556392" containerName="registry-server" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.949468 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="fdfe94d1-7830-4d2e-a6c1-bbb9d904bed5" containerName="registry-server" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.949547 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="2d375ce4-cfbc-4019-a2a4-3f31c8bd10fe" containerName="registry-server" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.949639 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3266538-9050-43ad-a3d6-7428f83aa788" containerName="registry-server" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.949996 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="ad87e7e8-19c1-4c92-9400-9873a85e80b4" containerName="marketplace-operator" Jan 22 14:19:19 crc kubenswrapper[5099]: I0122 14:19:19.976019 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.024530 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4t4cn"] Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.025081 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-549c648458-qk9rh"] Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.024855 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4t4cn" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.026503 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bab26e39-d459-41cb-b009-09a804261374" containerName="controller-manager" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.026600 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab26e39-d459-41cb-b009-09a804261374" containerName="controller-manager" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.027020 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="bab26e39-d459-41cb-b009-09a804261374" containerName="controller-manager" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.029419 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.037663 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-549c648458-qk9rh"] Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.037820 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.096435 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bab26e39-d459-41cb-b009-09a804261374-proxy-ca-bundles\") pod \"bab26e39-d459-41cb-b009-09a804261374\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.096490 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrvzw\" (UniqueName: \"kubernetes.io/projected/bab26e39-d459-41cb-b009-09a804261374-kube-api-access-nrvzw\") pod \"bab26e39-d459-41cb-b009-09a804261374\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.096552 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bab26e39-d459-41cb-b009-09a804261374-serving-cert\") pod \"bab26e39-d459-41cb-b009-09a804261374\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.096583 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bab26e39-d459-41cb-b009-09a804261374-client-ca\") pod \"bab26e39-d459-41cb-b009-09a804261374\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.096618 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bab26e39-d459-41cb-b009-09a804261374-config\") pod \"bab26e39-d459-41cb-b009-09a804261374\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.096656 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bab26e39-d459-41cb-b009-09a804261374-tmp\") pod \"bab26e39-d459-41cb-b009-09a804261374\" (UID: \"bab26e39-d459-41cb-b009-09a804261374\") " Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.097056 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bab26e39-d459-41cb-b009-09a804261374-tmp" (OuterVolumeSpecName: "tmp") pod "bab26e39-d459-41cb-b009-09a804261374" (UID: "bab26e39-d459-41cb-b009-09a804261374"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.097498 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bab26e39-d459-41cb-b009-09a804261374-client-ca" (OuterVolumeSpecName: "client-ca") pod "bab26e39-d459-41cb-b009-09a804261374" (UID: "bab26e39-d459-41cb-b009-09a804261374"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.097814 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bab26e39-d459-41cb-b009-09a804261374-config" (OuterVolumeSpecName: "config") pod "bab26e39-d459-41cb-b009-09a804261374" (UID: "bab26e39-d459-41cb-b009-09a804261374"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.097997 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bab26e39-d459-41cb-b009-09a804261374-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bab26e39-d459-41cb-b009-09a804261374" (UID: "bab26e39-d459-41cb-b009-09a804261374"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.103807 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab26e39-d459-41cb-b009-09a804261374-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bab26e39-d459-41cb-b009-09a804261374" (UID: "bab26e39-d459-41cb-b009-09a804261374"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.103828 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab26e39-d459-41cb-b009-09a804261374-kube-api-access-nrvzw" (OuterVolumeSpecName: "kube-api-access-nrvzw") pod "bab26e39-d459-41cb-b009-09a804261374" (UID: "bab26e39-d459-41cb-b009-09a804261374"). InnerVolumeSpecName "kube-api-access-nrvzw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.198198 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d387ff84-2c29-4ca7-93a2-aa2eab9fd253-tmp\") pod \"controller-manager-549c648458-qk9rh\" (UID: \"d387ff84-2c29-4ca7-93a2-aa2eab9fd253\") " pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.198303 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d387ff84-2c29-4ca7-93a2-aa2eab9fd253-config\") pod \"controller-manager-549c648458-qk9rh\" (UID: \"d387ff84-2c29-4ca7-93a2-aa2eab9fd253\") " pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.198332 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d387ff84-2c29-4ca7-93a2-aa2eab9fd253-proxy-ca-bundles\") pod \"controller-manager-549c648458-qk9rh\" (UID: \"d387ff84-2c29-4ca7-93a2-aa2eab9fd253\") " pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.198402 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n24nk\" (UniqueName: \"kubernetes.io/projected/d387ff84-2c29-4ca7-93a2-aa2eab9fd253-kube-api-access-n24nk\") pod \"controller-manager-549c648458-qk9rh\" (UID: \"d387ff84-2c29-4ca7-93a2-aa2eab9fd253\") " pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.198462 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d387ff84-2c29-4ca7-93a2-aa2eab9fd253-serving-cert\") pod \"controller-manager-549c648458-qk9rh\" (UID: \"d387ff84-2c29-4ca7-93a2-aa2eab9fd253\") " 
pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.198531 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d387ff84-2c29-4ca7-93a2-aa2eab9fd253-client-ca\") pod \"controller-manager-549c648458-qk9rh\" (UID: \"d387ff84-2c29-4ca7-93a2-aa2eab9fd253\") " pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.198556 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fklr8\" (UniqueName: \"kubernetes.io/projected/d4d05a0d-9625-494b-a0a2-9ebd06498c18-kube-api-access-fklr8\") pod \"redhat-marketplace-4t4cn\" (UID: \"d4d05a0d-9625-494b-a0a2-9ebd06498c18\") " pod="openshift-marketplace/redhat-marketplace-4t4cn" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.198616 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4d05a0d-9625-494b-a0a2-9ebd06498c18-utilities\") pod \"redhat-marketplace-4t4cn\" (UID: \"d4d05a0d-9625-494b-a0a2-9ebd06498c18\") " pod="openshift-marketplace/redhat-marketplace-4t4cn" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.198643 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4d05a0d-9625-494b-a0a2-9ebd06498c18-catalog-content\") pod \"redhat-marketplace-4t4cn\" (UID: \"d4d05a0d-9625-494b-a0a2-9ebd06498c18\") " pod="openshift-marketplace/redhat-marketplace-4t4cn" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.198714 5099 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bab26e39-d459-41cb-b009-09a804261374-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.198757 5099 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bab26e39-d459-41cb-b009-09a804261374-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.198767 5099 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bab26e39-d459-41cb-b009-09a804261374-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.198777 5099 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bab26e39-d459-41cb-b009-09a804261374-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.198790 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nrvzw\" (UniqueName: \"kubernetes.io/projected/bab26e39-d459-41cb-b009-09a804261374-kube-api-access-nrvzw\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.198799 5099 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bab26e39-d459-41cb-b009-09a804261374-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.284186 5099 generic.go:358] "Generic (PLEG): container finished" podID="bab26e39-d459-41cb-b009-09a804261374" 
containerID="3587b9126625dc1b0ae24445bde61fe29fb9c2dea98872e891fcb80f0f735457" exitCode=0 Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.284316 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.284334 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" event={"ID":"bab26e39-d459-41cb-b009-09a804261374","Type":"ContainerDied","Data":"3587b9126625dc1b0ae24445bde61fe29fb9c2dea98872e891fcb80f0f735457"} Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.285622 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-549b6648c8-gf2nm" event={"ID":"bab26e39-d459-41cb-b009-09a804261374","Type":"ContainerDied","Data":"0ade783951cc8b5e9942a3a8ff1dd27375a729d825d5446cae3876cdb9e563dd"} Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.285741 5099 scope.go:117] "RemoveContainer" containerID="3587b9126625dc1b0ae24445bde61fe29fb9c2dea98872e891fcb80f0f735457" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.299631 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fklr8\" (UniqueName: \"kubernetes.io/projected/d4d05a0d-9625-494b-a0a2-9ebd06498c18-kube-api-access-fklr8\") pod \"redhat-marketplace-4t4cn\" (UID: \"d4d05a0d-9625-494b-a0a2-9ebd06498c18\") " pod="openshift-marketplace/redhat-marketplace-4t4cn" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.299682 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4d05a0d-9625-494b-a0a2-9ebd06498c18-utilities\") pod \"redhat-marketplace-4t4cn\" (UID: \"d4d05a0d-9625-494b-a0a2-9ebd06498c18\") " pod="openshift-marketplace/redhat-marketplace-4t4cn" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.299712 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4d05a0d-9625-494b-a0a2-9ebd06498c18-catalog-content\") pod \"redhat-marketplace-4t4cn\" (UID: \"d4d05a0d-9625-494b-a0a2-9ebd06498c18\") " pod="openshift-marketplace/redhat-marketplace-4t4cn" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.299759 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d387ff84-2c29-4ca7-93a2-aa2eab9fd253-tmp\") pod \"controller-manager-549c648458-qk9rh\" (UID: \"d387ff84-2c29-4ca7-93a2-aa2eab9fd253\") " pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.299796 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d387ff84-2c29-4ca7-93a2-aa2eab9fd253-config\") pod \"controller-manager-549c648458-qk9rh\" (UID: \"d387ff84-2c29-4ca7-93a2-aa2eab9fd253\") " pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.299823 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d387ff84-2c29-4ca7-93a2-aa2eab9fd253-proxy-ca-bundles\") pod \"controller-manager-549c648458-qk9rh\" (UID: \"d387ff84-2c29-4ca7-93a2-aa2eab9fd253\") " 
pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.299844 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n24nk\" (UniqueName: \"kubernetes.io/projected/d387ff84-2c29-4ca7-93a2-aa2eab9fd253-kube-api-access-n24nk\") pod \"controller-manager-549c648458-qk9rh\" (UID: \"d387ff84-2c29-4ca7-93a2-aa2eab9fd253\") " pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.299886 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d387ff84-2c29-4ca7-93a2-aa2eab9fd253-serving-cert\") pod \"controller-manager-549c648458-qk9rh\" (UID: \"d387ff84-2c29-4ca7-93a2-aa2eab9fd253\") " pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.299931 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d387ff84-2c29-4ca7-93a2-aa2eab9fd253-client-ca\") pod \"controller-manager-549c648458-qk9rh\" (UID: \"d387ff84-2c29-4ca7-93a2-aa2eab9fd253\") " pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.300785 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d387ff84-2c29-4ca7-93a2-aa2eab9fd253-tmp\") pod \"controller-manager-549c648458-qk9rh\" (UID: \"d387ff84-2c29-4ca7-93a2-aa2eab9fd253\") " pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.300785 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4d05a0d-9625-494b-a0a2-9ebd06498c18-utilities\") pod \"redhat-marketplace-4t4cn\" (UID: \"d4d05a0d-9625-494b-a0a2-9ebd06498c18\") " pod="openshift-marketplace/redhat-marketplace-4t4cn" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.300850 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4d05a0d-9625-494b-a0a2-9ebd06498c18-catalog-content\") pod \"redhat-marketplace-4t4cn\" (UID: \"d4d05a0d-9625-494b-a0a2-9ebd06498c18\") " pod="openshift-marketplace/redhat-marketplace-4t4cn" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.301706 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d387ff84-2c29-4ca7-93a2-aa2eab9fd253-client-ca\") pod \"controller-manager-549c648458-qk9rh\" (UID: \"d387ff84-2c29-4ca7-93a2-aa2eab9fd253\") " pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.302194 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d387ff84-2c29-4ca7-93a2-aa2eab9fd253-proxy-ca-bundles\") pod \"controller-manager-549c648458-qk9rh\" (UID: \"d387ff84-2c29-4ca7-93a2-aa2eab9fd253\") " pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.303058 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d387ff84-2c29-4ca7-93a2-aa2eab9fd253-config\") pod 
\"controller-manager-549c648458-qk9rh\" (UID: \"d387ff84-2c29-4ca7-93a2-aa2eab9fd253\") " pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.308081 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d387ff84-2c29-4ca7-93a2-aa2eab9fd253-serving-cert\") pod \"controller-manager-549c648458-qk9rh\" (UID: \"d387ff84-2c29-4ca7-93a2-aa2eab9fd253\") " pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.316923 5099 scope.go:117] "RemoveContainer" containerID="3587b9126625dc1b0ae24445bde61fe29fb9c2dea98872e891fcb80f0f735457" Jan 22 14:19:20 crc kubenswrapper[5099]: E0122 14:19:20.317445 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3587b9126625dc1b0ae24445bde61fe29fb9c2dea98872e891fcb80f0f735457\": container with ID starting with 3587b9126625dc1b0ae24445bde61fe29fb9c2dea98872e891fcb80f0f735457 not found: ID does not exist" containerID="3587b9126625dc1b0ae24445bde61fe29fb9c2dea98872e891fcb80f0f735457" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.317516 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3587b9126625dc1b0ae24445bde61fe29fb9c2dea98872e891fcb80f0f735457"} err="failed to get container status \"3587b9126625dc1b0ae24445bde61fe29fb9c2dea98872e891fcb80f0f735457\": rpc error: code = NotFound desc = could not find container \"3587b9126625dc1b0ae24445bde61fe29fb9c2dea98872e891fcb80f0f735457\": container with ID starting with 3587b9126625dc1b0ae24445bde61fe29fb9c2dea98872e891fcb80f0f735457 not found: ID does not exist" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.322476 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n24nk\" (UniqueName: \"kubernetes.io/projected/d387ff84-2c29-4ca7-93a2-aa2eab9fd253-kube-api-access-n24nk\") pod \"controller-manager-549c648458-qk9rh\" (UID: \"d387ff84-2c29-4ca7-93a2-aa2eab9fd253\") " pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.322539 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fklr8\" (UniqueName: \"kubernetes.io/projected/d4d05a0d-9625-494b-a0a2-9ebd06498c18-kube-api-access-fklr8\") pod \"redhat-marketplace-4t4cn\" (UID: \"d4d05a0d-9625-494b-a0a2-9ebd06498c18\") " pod="openshift-marketplace/redhat-marketplace-4t4cn" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.323397 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-549b6648c8-gf2nm"] Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.327524 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-549b6648c8-gf2nm"] Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.346139 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4t4cn" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.360569 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.542002 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4t4cn"] Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.573226 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-549c648458-qk9rh"] Jan 22 14:19:20 crc kubenswrapper[5099]: W0122 14:19:20.578489 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd387ff84_2c29_4ca7_93a2_aa2eab9fd253.slice/crio-bb6934e3906d2493f0443c3f5080a225b27aa3048378d47d8e617921997d76b7 WatchSource:0}: Error finding container bb6934e3906d2493f0443c3f5080a225b27aa3048378d47d8e617921997d76b7: Status 404 returned error can't find the container with id bb6934e3906d2493f0443c3f5080a225b27aa3048378d47d8e617921997d76b7 Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.769541 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3266538-9050-43ad-a3d6-7428f83aa788" path="/var/lib/kubelet/pods/b3266538-9050-43ad-a3d6-7428f83aa788/volumes" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.770214 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bab26e39-d459-41cb-b009-09a804261374" path="/var/lib/kubelet/pods/bab26e39-d459-41cb-b009-09a804261374/volumes" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.949354 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pw4sz"] Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.967424 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pw4sz"] Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.967643 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pw4sz" Jan 22 14:19:20 crc kubenswrapper[5099]: I0122 14:19:20.980000 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 22 14:19:21 crc kubenswrapper[5099]: I0122 14:19:21.110353 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm22n\" (UniqueName: \"kubernetes.io/projected/f4a52a56-c643-431a-b270-c92429b4e328-kube-api-access-gm22n\") pod \"redhat-operators-pw4sz\" (UID: \"f4a52a56-c643-431a-b270-c92429b4e328\") " pod="openshift-marketplace/redhat-operators-pw4sz" Jan 22 14:19:21 crc kubenswrapper[5099]: I0122 14:19:21.110818 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4a52a56-c643-431a-b270-c92429b4e328-catalog-content\") pod \"redhat-operators-pw4sz\" (UID: \"f4a52a56-c643-431a-b270-c92429b4e328\") " pod="openshift-marketplace/redhat-operators-pw4sz" Jan 22 14:19:21 crc kubenswrapper[5099]: I0122 14:19:21.110940 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4a52a56-c643-431a-b270-c92429b4e328-utilities\") pod \"redhat-operators-pw4sz\" (UID: \"f4a52a56-c643-431a-b270-c92429b4e328\") " pod="openshift-marketplace/redhat-operators-pw4sz" Jan 22 14:19:21 crc kubenswrapper[5099]: I0122 14:19:21.211892 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4a52a56-c643-431a-b270-c92429b4e328-catalog-content\") pod \"redhat-operators-pw4sz\" (UID: \"f4a52a56-c643-431a-b270-c92429b4e328\") " pod="openshift-marketplace/redhat-operators-pw4sz" Jan 22 14:19:21 crc kubenswrapper[5099]: I0122 14:19:21.211950 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4a52a56-c643-431a-b270-c92429b4e328-utilities\") pod \"redhat-operators-pw4sz\" (UID: \"f4a52a56-c643-431a-b270-c92429b4e328\") " pod="openshift-marketplace/redhat-operators-pw4sz" Jan 22 14:19:21 crc kubenswrapper[5099]: I0122 14:19:21.212093 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gm22n\" (UniqueName: \"kubernetes.io/projected/f4a52a56-c643-431a-b270-c92429b4e328-kube-api-access-gm22n\") pod \"redhat-operators-pw4sz\" (UID: \"f4a52a56-c643-431a-b270-c92429b4e328\") " pod="openshift-marketplace/redhat-operators-pw4sz" Jan 22 14:19:21 crc kubenswrapper[5099]: I0122 14:19:21.213078 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4a52a56-c643-431a-b270-c92429b4e328-utilities\") pod \"redhat-operators-pw4sz\" (UID: \"f4a52a56-c643-431a-b270-c92429b4e328\") " pod="openshift-marketplace/redhat-operators-pw4sz" Jan 22 14:19:21 crc kubenswrapper[5099]: I0122 14:19:21.213499 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4a52a56-c643-431a-b270-c92429b4e328-catalog-content\") pod \"redhat-operators-pw4sz\" (UID: \"f4a52a56-c643-431a-b270-c92429b4e328\") " pod="openshift-marketplace/redhat-operators-pw4sz" Jan 22 14:19:21 crc kubenswrapper[5099]: I0122 14:19:21.241771 5099 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gm22n\" (UniqueName: \"kubernetes.io/projected/f4a52a56-c643-431a-b270-c92429b4e328-kube-api-access-gm22n\") pod \"redhat-operators-pw4sz\" (UID: \"f4a52a56-c643-431a-b270-c92429b4e328\") " pod="openshift-marketplace/redhat-operators-pw4sz" Jan 22 14:19:21 crc kubenswrapper[5099]: I0122 14:19:21.294114 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pw4sz" Jan 22 14:19:21 crc kubenswrapper[5099]: I0122 14:19:21.305790 5099 generic.go:358] "Generic (PLEG): container finished" podID="d4d05a0d-9625-494b-a0a2-9ebd06498c18" containerID="9394ec15c07699173907724d71e2debb238f601bdd03a77a046303c3bf6cc44d" exitCode=0 Jan 22 14:19:21 crc kubenswrapper[5099]: I0122 14:19:21.305949 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4t4cn" event={"ID":"d4d05a0d-9625-494b-a0a2-9ebd06498c18","Type":"ContainerDied","Data":"9394ec15c07699173907724d71e2debb238f601bdd03a77a046303c3bf6cc44d"} Jan 22 14:19:21 crc kubenswrapper[5099]: I0122 14:19:21.305986 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4t4cn" event={"ID":"d4d05a0d-9625-494b-a0a2-9ebd06498c18","Type":"ContainerStarted","Data":"aeb2b488c84f9e18aaacb59e107dd32b673cd4e84a0baa3861391dfd7e385a64"} Jan 22 14:19:21 crc kubenswrapper[5099]: I0122 14:19:21.315090 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" event={"ID":"d387ff84-2c29-4ca7-93a2-aa2eab9fd253","Type":"ContainerStarted","Data":"476a387bb9ecf6ad755be34b13393f471c83d6e0434da126b45fb0b6e4910f07"} Jan 22 14:19:21 crc kubenswrapper[5099]: I0122 14:19:21.315147 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:21 crc kubenswrapper[5099]: I0122 14:19:21.315545 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" event={"ID":"d387ff84-2c29-4ca7-93a2-aa2eab9fd253","Type":"ContainerStarted","Data":"bb6934e3906d2493f0443c3f5080a225b27aa3048378d47d8e617921997d76b7"} Jan 22 14:19:21 crc kubenswrapper[5099]: I0122 14:19:21.365502 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" podStartSLOduration=2.365472297 podStartE2EDuration="2.365472297s" podCreationTimestamp="2026-01-22 14:19:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:19:21.360489457 +0000 UTC m=+319.068239704" watchObservedRunningTime="2026-01-22 14:19:21.365472297 +0000 UTC m=+319.073222534" Jan 22 14:19:21 crc kubenswrapper[5099]: I0122 14:19:21.738547 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pw4sz"] Jan 22 14:19:21 crc kubenswrapper[5099]: I0122 14:19:21.747675 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-549c648458-qk9rh" Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.321473 5099 generic.go:358] "Generic (PLEG): container finished" podID="f4a52a56-c643-431a-b270-c92429b4e328" containerID="8fc6dd68fa1ca31d802d6429c99ba77406cf0997894700b1aa03ce500408d012" exitCode=0 Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.321559 
5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw4sz" event={"ID":"f4a52a56-c643-431a-b270-c92429b4e328","Type":"ContainerDied","Data":"8fc6dd68fa1ca31d802d6429c99ba77406cf0997894700b1aa03ce500408d012"} Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.322399 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw4sz" event={"ID":"f4a52a56-c643-431a-b270-c92429b4e328","Type":"ContainerStarted","Data":"6d9f111fd786259d245da1e49e3f641e9b685d3d75709ba68ae605b93858abbb"} Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.324500 5099 generic.go:358] "Generic (PLEG): container finished" podID="d4d05a0d-9625-494b-a0a2-9ebd06498c18" containerID="2ef532635b40a1be58a59046848de114738d63b892905ad3f1b79ec9b4e02ae6" exitCode=0 Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.325809 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4t4cn" event={"ID":"d4d05a0d-9625-494b-a0a2-9ebd06498c18","Type":"ContainerDied","Data":"2ef532635b40a1be58a59046848de114738d63b892905ad3f1b79ec9b4e02ae6"} Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.361854 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-52bqc"] Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.416543 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52bqc"] Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.416808 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52bqc" Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.424329 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.430521 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25222624-29df-4640-a7d9-6840ef510d65-catalog-content\") pod \"community-operators-52bqc\" (UID: \"25222624-29df-4640-a7d9-6840ef510d65\") " pod="openshift-marketplace/community-operators-52bqc" Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.430591 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25222624-29df-4640-a7d9-6840ef510d65-utilities\") pod \"community-operators-52bqc\" (UID: \"25222624-29df-4640-a7d9-6840ef510d65\") " pod="openshift-marketplace/community-operators-52bqc" Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.430634 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lgfm\" (UniqueName: \"kubernetes.io/projected/25222624-29df-4640-a7d9-6840ef510d65-kube-api-access-2lgfm\") pod \"community-operators-52bqc\" (UID: \"25222624-29df-4640-a7d9-6840ef510d65\") " pod="openshift-marketplace/community-operators-52bqc" Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.532041 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2lgfm\" (UniqueName: \"kubernetes.io/projected/25222624-29df-4640-a7d9-6840ef510d65-kube-api-access-2lgfm\") pod \"community-operators-52bqc\" (UID: \"25222624-29df-4640-a7d9-6840ef510d65\") " pod="openshift-marketplace/community-operators-52bqc" 
Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.532146 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25222624-29df-4640-a7d9-6840ef510d65-catalog-content\") pod \"community-operators-52bqc\" (UID: \"25222624-29df-4640-a7d9-6840ef510d65\") " pod="openshift-marketplace/community-operators-52bqc" Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.532212 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25222624-29df-4640-a7d9-6840ef510d65-utilities\") pod \"community-operators-52bqc\" (UID: \"25222624-29df-4640-a7d9-6840ef510d65\") " pod="openshift-marketplace/community-operators-52bqc" Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.533839 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25222624-29df-4640-a7d9-6840ef510d65-utilities\") pod \"community-operators-52bqc\" (UID: \"25222624-29df-4640-a7d9-6840ef510d65\") " pod="openshift-marketplace/community-operators-52bqc" Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.533847 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25222624-29df-4640-a7d9-6840ef510d65-catalog-content\") pod \"community-operators-52bqc\" (UID: \"25222624-29df-4640-a7d9-6840ef510d65\") " pod="openshift-marketplace/community-operators-52bqc" Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.559463 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lgfm\" (UniqueName: \"kubernetes.io/projected/25222624-29df-4640-a7d9-6840ef510d65-kube-api-access-2lgfm\") pod \"community-operators-52bqc\" (UID: \"25222624-29df-4640-a7d9-6840ef510d65\") " pod="openshift-marketplace/community-operators-52bqc" Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.742831 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52bqc" Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.789779 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-h7bjs"] Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.799271 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.803235 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-h7bjs"] Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.940349 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/000e1067-5191-4860-98c5-54aa38e66e18-registry-certificates\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.940433 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/000e1067-5191-4860-98c5-54aa38e66e18-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.940483 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/000e1067-5191-4860-98c5-54aa38e66e18-bound-sa-token\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.940509 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/000e1067-5191-4860-98c5-54aa38e66e18-registry-tls\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.940531 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q8cr\" (UniqueName: \"kubernetes.io/projected/000e1067-5191-4860-98c5-54aa38e66e18-kube-api-access-4q8cr\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.940585 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/000e1067-5191-4860-98c5-54aa38e66e18-trusted-ca\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.940632 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.940653 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/000e1067-5191-4860-98c5-54aa38e66e18-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:22 crc kubenswrapper[5099]: I0122 14:19:22.996451 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.008937 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52bqc"] Jan 22 14:19:23 crc kubenswrapper[5099]: W0122 14:19:23.013875 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25222624_29df_4640_a7d9_6840ef510d65.slice/crio-6a47a059e3bcb876a910bdc036fac1e90de760e74c43e1ac49be32cb9667ae3c WatchSource:0}: Error finding container 6a47a059e3bcb876a910bdc036fac1e90de760e74c43e1ac49be32cb9667ae3c: Status 404 returned error can't find the container with id 6a47a059e3bcb876a910bdc036fac1e90de760e74c43e1ac49be32cb9667ae3c Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.042526 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/000e1067-5191-4860-98c5-54aa38e66e18-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.042784 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/000e1067-5191-4860-98c5-54aa38e66e18-bound-sa-token\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.042815 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/000e1067-5191-4860-98c5-54aa38e66e18-registry-tls\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.042838 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4q8cr\" (UniqueName: \"kubernetes.io/projected/000e1067-5191-4860-98c5-54aa38e66e18-kube-api-access-4q8cr\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.042880 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/000e1067-5191-4860-98c5-54aa38e66e18-trusted-ca\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.042916 5099 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/000e1067-5191-4860-98c5-54aa38e66e18-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.042950 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/000e1067-5191-4860-98c5-54aa38e66e18-registry-certificates\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.043928 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/000e1067-5191-4860-98c5-54aa38e66e18-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.044924 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/000e1067-5191-4860-98c5-54aa38e66e18-registry-certificates\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.045792 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/000e1067-5191-4860-98c5-54aa38e66e18-trusted-ca\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.050252 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/000e1067-5191-4860-98c5-54aa38e66e18-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.050400 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/000e1067-5191-4860-98c5-54aa38e66e18-registry-tls\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.060949 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/000e1067-5191-4860-98c5-54aa38e66e18-bound-sa-token\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: \"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.064369 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q8cr\" (UniqueName: \"kubernetes.io/projected/000e1067-5191-4860-98c5-54aa38e66e18-kube-api-access-4q8cr\") pod \"image-registry-5d9d95bf5b-h7bjs\" (UID: 
\"000e1067-5191-4860-98c5-54aa38e66e18\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.133042 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.341739 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4t4cn" event={"ID":"d4d05a0d-9625-494b-a0a2-9ebd06498c18","Type":"ContainerStarted","Data":"faf042702711d699907aaf09013da66e744bdf007e1500b07e577fe5a5f94d75"} Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.346117 5099 generic.go:358] "Generic (PLEG): container finished" podID="25222624-29df-4640-a7d9-6840ef510d65" containerID="68b9e00658a47df85e5c0c98250b7050609ae4ebc50e205c0b0d267a1d5bf6b6" exitCode=0 Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.346350 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52bqc" event={"ID":"25222624-29df-4640-a7d9-6840ef510d65","Type":"ContainerDied","Data":"68b9e00658a47df85e5c0c98250b7050609ae4ebc50e205c0b0d267a1d5bf6b6"} Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.346390 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52bqc" event={"ID":"25222624-29df-4640-a7d9-6840ef510d65","Type":"ContainerStarted","Data":"6a47a059e3bcb876a910bdc036fac1e90de760e74c43e1ac49be32cb9667ae3c"} Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.368219 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xgcbh"] Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.384737 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xgcbh" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.387686 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xgcbh"] Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.396780 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4t4cn" podStartSLOduration=3.672965782 podStartE2EDuration="4.396747571s" podCreationTimestamp="2026-01-22 14:19:19 +0000 UTC" firstStartedPulling="2026-01-22 14:19:21.307195711 +0000 UTC m=+319.014945958" lastFinishedPulling="2026-01-22 14:19:22.03097751 +0000 UTC m=+319.738727747" observedRunningTime="2026-01-22 14:19:23.378600439 +0000 UTC m=+321.086350696" watchObservedRunningTime="2026-01-22 14:19:23.396747571 +0000 UTC m=+321.104497798" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.399063 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.485241 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-h7bjs"] Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.551888 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c62bbdb2-7629-4a41-9333-09819afb184a-catalog-content\") pod \"certified-operators-xgcbh\" (UID: \"c62bbdb2-7629-4a41-9333-09819afb184a\") " pod="openshift-marketplace/certified-operators-xgcbh" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.552045 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c62bbdb2-7629-4a41-9333-09819afb184a-utilities\") pod \"certified-operators-xgcbh\" (UID: \"c62bbdb2-7629-4a41-9333-09819afb184a\") " pod="openshift-marketplace/certified-operators-xgcbh" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.552451 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dmz2\" (UniqueName: \"kubernetes.io/projected/c62bbdb2-7629-4a41-9333-09819afb184a-kube-api-access-9dmz2\") pod \"certified-operators-xgcbh\" (UID: \"c62bbdb2-7629-4a41-9333-09819afb184a\") " pod="openshift-marketplace/certified-operators-xgcbh" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.654605 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9dmz2\" (UniqueName: \"kubernetes.io/projected/c62bbdb2-7629-4a41-9333-09819afb184a-kube-api-access-9dmz2\") pod \"certified-operators-xgcbh\" (UID: \"c62bbdb2-7629-4a41-9333-09819afb184a\") " pod="openshift-marketplace/certified-operators-xgcbh" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.654959 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c62bbdb2-7629-4a41-9333-09819afb184a-catalog-content\") pod \"certified-operators-xgcbh\" (UID: \"c62bbdb2-7629-4a41-9333-09819afb184a\") " pod="openshift-marketplace/certified-operators-xgcbh" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.655031 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c62bbdb2-7629-4a41-9333-09819afb184a-utilities\") pod \"certified-operators-xgcbh\" (UID: \"c62bbdb2-7629-4a41-9333-09819afb184a\") " pod="openshift-marketplace/certified-operators-xgcbh" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.655542 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c62bbdb2-7629-4a41-9333-09819afb184a-catalog-content\") pod \"certified-operators-xgcbh\" (UID: \"c62bbdb2-7629-4a41-9333-09819afb184a\") " pod="openshift-marketplace/certified-operators-xgcbh" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.655559 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c62bbdb2-7629-4a41-9333-09819afb184a-utilities\") pod \"certified-operators-xgcbh\" (UID: \"c62bbdb2-7629-4a41-9333-09819afb184a\") " pod="openshift-marketplace/certified-operators-xgcbh" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.684260 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dmz2\" (UniqueName: \"kubernetes.io/projected/c62bbdb2-7629-4a41-9333-09819afb184a-kube-api-access-9dmz2\") pod \"certified-operators-xgcbh\" (UID: \"c62bbdb2-7629-4a41-9333-09819afb184a\") " pod="openshift-marketplace/certified-operators-xgcbh" Jan 22 14:19:23 crc kubenswrapper[5099]: I0122 14:19:23.780245 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xgcbh" Jan 22 14:19:24 crc kubenswrapper[5099]: I0122 14:19:24.018402 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xgcbh"] Jan 22 14:19:24 crc kubenswrapper[5099]: W0122 14:19:24.022072 5099 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc62bbdb2_7629_4a41_9333_09819afb184a.slice/crio-d973e540840d9fad3207390050034d0a753c99611304716a4303e762e1b21a6f WatchSource:0}: Error finding container d973e540840d9fad3207390050034d0a753c99611304716a4303e762e1b21a6f: Status 404 returned error can't find the container with id d973e540840d9fad3207390050034d0a753c99611304716a4303e762e1b21a6f Jan 22 14:19:24 crc kubenswrapper[5099]: I0122 14:19:24.371072 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" event={"ID":"000e1067-5191-4860-98c5-54aa38e66e18","Type":"ContainerStarted","Data":"8bf40d52428652819408eadd9da04877b7c77ef4ebc52dc5a2d430f162782848"} Jan 22 14:19:24 crc kubenswrapper[5099]: I0122 14:19:24.371717 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" event={"ID":"000e1067-5191-4860-98c5-54aa38e66e18","Type":"ContainerStarted","Data":"a069ba367fd7ae6f3dd6f8000347329bd0b657de0dbee2c19546a0fa67c70661"} Jan 22 14:19:24 crc kubenswrapper[5099]: I0122 14:19:24.372903 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:24 crc kubenswrapper[5099]: I0122 14:19:24.375740 5099 generic.go:358] "Generic (PLEG): container finished" podID="f4a52a56-c643-431a-b270-c92429b4e328" containerID="f63828bdbf544903ff9eb379ff97c45047fa4f7687170fcbe7febf41d944f833" exitCode=0 Jan 22 14:19:24 crc kubenswrapper[5099]: I0122 14:19:24.375994 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-pw4sz" event={"ID":"f4a52a56-c643-431a-b270-c92429b4e328","Type":"ContainerDied","Data":"f63828bdbf544903ff9eb379ff97c45047fa4f7687170fcbe7febf41d944f833"} Jan 22 14:19:24 crc kubenswrapper[5099]: I0122 14:19:24.379099 5099 generic.go:358] "Generic (PLEG): container finished" podID="c62bbdb2-7629-4a41-9333-09819afb184a" containerID="eb9a13a1171f575e4de7ca7639809053e47d878287fecfae4ba5f5ac5e1f4def" exitCode=0 Jan 22 14:19:24 crc kubenswrapper[5099]: I0122 14:19:24.379236 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgcbh" event={"ID":"c62bbdb2-7629-4a41-9333-09819afb184a","Type":"ContainerDied","Data":"eb9a13a1171f575e4de7ca7639809053e47d878287fecfae4ba5f5ac5e1f4def"} Jan 22 14:19:24 crc kubenswrapper[5099]: I0122 14:19:24.379263 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgcbh" event={"ID":"c62bbdb2-7629-4a41-9333-09819afb184a","Type":"ContainerStarted","Data":"d973e540840d9fad3207390050034d0a753c99611304716a4303e762e1b21a6f"} Jan 22 14:19:24 crc kubenswrapper[5099]: I0122 14:19:24.385409 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52bqc" event={"ID":"25222624-29df-4640-a7d9-6840ef510d65","Type":"ContainerStarted","Data":"2b5749270eb566f0b64ee1dfe9f96b9ffd7d2c639a0a423d911980e0915036ed"} Jan 22 14:19:24 crc kubenswrapper[5099]: I0122 14:19:24.431952 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" podStartSLOduration=2.431927731 podStartE2EDuration="2.431927731s" podCreationTimestamp="2026-01-22 14:19:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:19:24.405674098 +0000 UTC m=+322.113424345" watchObservedRunningTime="2026-01-22 14:19:24.431927731 +0000 UTC m=+322.139677968" Jan 22 14:19:25 crc kubenswrapper[5099]: I0122 14:19:25.395839 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw4sz" event={"ID":"f4a52a56-c643-431a-b270-c92429b4e328","Type":"ContainerStarted","Data":"6f816e7898c66d2cb27d325efb4cc206a01904eb45ba0b47517774ec44de1a74"} Jan 22 14:19:25 crc kubenswrapper[5099]: I0122 14:19:25.422340 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgcbh" event={"ID":"c62bbdb2-7629-4a41-9333-09819afb184a","Type":"ContainerStarted","Data":"730de9fd86fd72169f22467b9a3837473bf6412dca36ed9a5a85ae19c3898e82"} Jan 22 14:19:25 crc kubenswrapper[5099]: I0122 14:19:25.436401 5099 generic.go:358] "Generic (PLEG): container finished" podID="25222624-29df-4640-a7d9-6840ef510d65" containerID="2b5749270eb566f0b64ee1dfe9f96b9ffd7d2c639a0a423d911980e0915036ed" exitCode=0 Jan 22 14:19:25 crc kubenswrapper[5099]: I0122 14:19:25.437103 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pw4sz" podStartSLOduration=4.614258795 podStartE2EDuration="5.43706573s" podCreationTimestamp="2026-01-22 14:19:20 +0000 UTC" firstStartedPulling="2026-01-22 14:19:22.322547286 +0000 UTC m=+320.030297523" lastFinishedPulling="2026-01-22 14:19:23.145354221 +0000 UTC m=+320.853104458" observedRunningTime="2026-01-22 14:19:25.430752015 +0000 UTC m=+323.138502252" watchObservedRunningTime="2026-01-22 14:19:25.43706573 +0000 UTC m=+323.144815967" Jan 22 14:19:25 
crc kubenswrapper[5099]: I0122 14:19:25.437361 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52bqc" event={"ID":"25222624-29df-4640-a7d9-6840ef510d65","Type":"ContainerDied","Data":"2b5749270eb566f0b64ee1dfe9f96b9ffd7d2c639a0a423d911980e0915036ed"} Jan 22 14:19:25 crc kubenswrapper[5099]: E0122 14:19:25.763252 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb7b07c4_a956_458e_abaa_06dc68b5e359.slice/crio-bbf5c6314168fa717cfb7f8a84d41b4b842535a5e24216b9580bd43aabb6ceb1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb7b07c4_a956_458e_abaa_06dc68b5e359.slice\": RecentStats: unable to find data in memory cache]" Jan 22 14:19:26 crc kubenswrapper[5099]: I0122 14:19:26.447624 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52bqc" event={"ID":"25222624-29df-4640-a7d9-6840ef510d65","Type":"ContainerStarted","Data":"00f017783d065eef64039a0ff8f7aee6dbd136199a7f2b47758399bcdbd00a79"} Jan 22 14:19:26 crc kubenswrapper[5099]: I0122 14:19:26.450720 5099 generic.go:358] "Generic (PLEG): container finished" podID="c62bbdb2-7629-4a41-9333-09819afb184a" containerID="730de9fd86fd72169f22467b9a3837473bf6412dca36ed9a5a85ae19c3898e82" exitCode=0 Jan 22 14:19:26 crc kubenswrapper[5099]: I0122 14:19:26.450850 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgcbh" event={"ID":"c62bbdb2-7629-4a41-9333-09819afb184a","Type":"ContainerDied","Data":"730de9fd86fd72169f22467b9a3837473bf6412dca36ed9a5a85ae19c3898e82"} Jan 22 14:19:26 crc kubenswrapper[5099]: I0122 14:19:26.450959 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgcbh" event={"ID":"c62bbdb2-7629-4a41-9333-09819afb184a","Type":"ContainerStarted","Data":"38d5b053345047424c91185211955097d58aceba8a4ed0734abb138c671d8a7c"} Jan 22 14:19:26 crc kubenswrapper[5099]: I0122 14:19:26.484861 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-52bqc" podStartSLOduration=3.836568642 podStartE2EDuration="4.484826947s" podCreationTimestamp="2026-01-22 14:19:22 +0000 UTC" firstStartedPulling="2026-01-22 14:19:23.347391586 +0000 UTC m=+321.055141823" lastFinishedPulling="2026-01-22 14:19:23.995649891 +0000 UTC m=+321.703400128" observedRunningTime="2026-01-22 14:19:26.479388156 +0000 UTC m=+324.187138413" watchObservedRunningTime="2026-01-22 14:19:26.484826947 +0000 UTC m=+324.192577184" Jan 22 14:19:26 crc kubenswrapper[5099]: I0122 14:19:26.505536 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xgcbh" podStartSLOduration=2.854982993 podStartE2EDuration="3.505511686s" podCreationTimestamp="2026-01-22 14:19:23 +0000 UTC" firstStartedPulling="2026-01-22 14:19:24.379794625 +0000 UTC m=+322.087544852" lastFinishedPulling="2026-01-22 14:19:25.030323308 +0000 UTC m=+322.738073545" observedRunningTime="2026-01-22 14:19:26.504261373 +0000 UTC m=+324.212011620" watchObservedRunningTime="2026-01-22 14:19:26.505511686 +0000 UTC m=+324.213261923" Jan 22 14:19:30 crc kubenswrapper[5099]: I0122 14:19:30.346653 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4t4cn" 
Jan 22 14:19:30 crc kubenswrapper[5099]: I0122 14:19:30.347550 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-4t4cn" Jan 22 14:19:30 crc kubenswrapper[5099]: I0122 14:19:30.402816 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4t4cn" Jan 22 14:19:30 crc kubenswrapper[5099]: I0122 14:19:30.531445 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4t4cn" Jan 22 14:19:31 crc kubenswrapper[5099]: I0122 14:19:31.294312 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-pw4sz" Jan 22 14:19:31 crc kubenswrapper[5099]: I0122 14:19:31.294752 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pw4sz" Jan 22 14:19:31 crc kubenswrapper[5099]: I0122 14:19:31.342638 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pw4sz" Jan 22 14:19:31 crc kubenswrapper[5099]: I0122 14:19:31.534844 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pw4sz" Jan 22 14:19:32 crc kubenswrapper[5099]: I0122 14:19:32.743499 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-52bqc" Jan 22 14:19:32 crc kubenswrapper[5099]: I0122 14:19:32.744235 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-52bqc" Jan 22 14:19:32 crc kubenswrapper[5099]: I0122 14:19:32.801597 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-52bqc" Jan 22 14:19:33 crc kubenswrapper[5099]: I0122 14:19:33.537320 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-52bqc" Jan 22 14:19:33 crc kubenswrapper[5099]: I0122 14:19:33.781523 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xgcbh" Jan 22 14:19:33 crc kubenswrapper[5099]: I0122 14:19:33.782820 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-xgcbh" Jan 22 14:19:33 crc kubenswrapper[5099]: I0122 14:19:33.835367 5099 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xgcbh" Jan 22 14:19:34 crc kubenswrapper[5099]: I0122 14:19:34.559280 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xgcbh" Jan 22 14:19:35 crc kubenswrapper[5099]: E0122 14:19:35.905870 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb7b07c4_a956_458e_abaa_06dc68b5e359.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb7b07c4_a956_458e_abaa_06dc68b5e359.slice/crio-bbf5c6314168fa717cfb7f8a84d41b4b842535a5e24216b9580bd43aabb6ceb1\": RecentStats: unable to find data in memory cache]" Jan 22 14:19:46 crc kubenswrapper[5099]: E0122 14:19:46.030236 5099 cadvisor_stats_provider.go:525] "Partial 
failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb7b07c4_a956_458e_abaa_06dc68b5e359.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb7b07c4_a956_458e_abaa_06dc68b5e359.slice/crio-bbf5c6314168fa717cfb7f8a84d41b4b842535a5e24216b9580bd43aabb6ceb1\": RecentStats: unable to find data in memory cache]" Jan 22 14:19:46 crc kubenswrapper[5099]: I0122 14:19:46.458419 5099 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-h7bjs" Jan 22 14:19:46 crc kubenswrapper[5099]: I0122 14:19:46.527422 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-psxhg"] Jan 22 14:19:56 crc kubenswrapper[5099]: E0122 14:19:56.162490 5099 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb7b07c4_a956_458e_abaa_06dc68b5e359.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb7b07c4_a956_458e_abaa_06dc68b5e359.slice/crio-bbf5c6314168fa717cfb7f8a84d41b4b842535a5e24216b9580bd43aabb6ceb1\": RecentStats: unable to find data in memory cache]" Jan 22 14:20:00 crc kubenswrapper[5099]: I0122 14:20:00.206042 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484860-6zpp7"] Jan 22 14:20:00 crc kubenswrapper[5099]: I0122 14:20:00.229714 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484860-6zpp7"] Jan 22 14:20:00 crc kubenswrapper[5099]: I0122 14:20:00.229893 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484860-6zpp7" Jan 22 14:20:00 crc kubenswrapper[5099]: I0122 14:20:00.234335 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 14:20:00 crc kubenswrapper[5099]: I0122 14:20:00.234558 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lvbxf\"" Jan 22 14:20:00 crc kubenswrapper[5099]: I0122 14:20:00.235049 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 14:20:00 crc kubenswrapper[5099]: I0122 14:20:00.300991 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmxgj\" (UniqueName: \"kubernetes.io/projected/17104002-b9e3-479f-b59f-d90796730b1c-kube-api-access-vmxgj\") pod \"auto-csr-approver-29484860-6zpp7\" (UID: \"17104002-b9e3-479f-b59f-d90796730b1c\") " pod="openshift-infra/auto-csr-approver-29484860-6zpp7" Jan 22 14:20:00 crc kubenswrapper[5099]: I0122 14:20:00.403082 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vmxgj\" (UniqueName: \"kubernetes.io/projected/17104002-b9e3-479f-b59f-d90796730b1c-kube-api-access-vmxgj\") pod \"auto-csr-approver-29484860-6zpp7\" (UID: \"17104002-b9e3-479f-b59f-d90796730b1c\") " pod="openshift-infra/auto-csr-approver-29484860-6zpp7" Jan 22 14:20:00 crc kubenswrapper[5099]: I0122 14:20:00.429588 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmxgj\" (UniqueName: \"kubernetes.io/projected/17104002-b9e3-479f-b59f-d90796730b1c-kube-api-access-vmxgj\") pod \"auto-csr-approver-29484860-6zpp7\" (UID: \"17104002-b9e3-479f-b59f-d90796730b1c\") " pod="openshift-infra/auto-csr-approver-29484860-6zpp7" Jan 22 14:20:00 crc kubenswrapper[5099]: I0122 14:20:00.553683 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484860-6zpp7" Jan 22 14:20:01 crc kubenswrapper[5099]: I0122 14:20:01.036801 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484860-6zpp7"] Jan 22 14:20:01 crc kubenswrapper[5099]: I0122 14:20:01.698765 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484860-6zpp7" event={"ID":"17104002-b9e3-479f-b59f-d90796730b1c","Type":"ContainerStarted","Data":"d3b63e5f87dc1e2506f61d23764b063aa479b2102c72ff9d4d148ac2e1ea1c1b"} Jan 22 14:20:05 crc kubenswrapper[5099]: I0122 14:20:05.255627 5099 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-qgd8l" Jan 22 14:20:05 crc kubenswrapper[5099]: I0122 14:20:05.284441 5099 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-qgd8l" Jan 22 14:20:05 crc kubenswrapper[5099]: I0122 14:20:05.735645 5099 generic.go:358] "Generic (PLEG): container finished" podID="17104002-b9e3-479f-b59f-d90796730b1c" containerID="e13103095df04d06210020925b56d2fca2684199d5a17e6e7205a2f82747b69b" exitCode=0 Jan 22 14:20:05 crc kubenswrapper[5099]: I0122 14:20:05.735837 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484860-6zpp7" event={"ID":"17104002-b9e3-479f-b59f-d90796730b1c","Type":"ContainerDied","Data":"e13103095df04d06210020925b56d2fca2684199d5a17e6e7205a2f82747b69b"} Jan 22 14:20:06 crc kubenswrapper[5099]: I0122 14:20:06.285476 5099 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-21 14:15:05 +0000 UTC" deadline="2026-02-16 02:25:27.583445906 +0000 UTC" Jan 22 14:20:06 crc kubenswrapper[5099]: I0122 14:20:06.285540 5099 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="588h5m21.297911545s" Jan 22 14:20:07 crc kubenswrapper[5099]: I0122 14:20:07.089858 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484860-6zpp7" Jan 22 14:20:07 crc kubenswrapper[5099]: I0122 14:20:07.236361 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmxgj\" (UniqueName: \"kubernetes.io/projected/17104002-b9e3-479f-b59f-d90796730b1c-kube-api-access-vmxgj\") pod \"17104002-b9e3-479f-b59f-d90796730b1c\" (UID: \"17104002-b9e3-479f-b59f-d90796730b1c\") " Jan 22 14:20:07 crc kubenswrapper[5099]: I0122 14:20:07.251106 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17104002-b9e3-479f-b59f-d90796730b1c-kube-api-access-vmxgj" (OuterVolumeSpecName: "kube-api-access-vmxgj") pod "17104002-b9e3-479f-b59f-d90796730b1c" (UID: "17104002-b9e3-479f-b59f-d90796730b1c"). InnerVolumeSpecName "kube-api-access-vmxgj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:20:07 crc kubenswrapper[5099]: I0122 14:20:07.286347 5099 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-21 14:15:05 +0000 UTC" deadline="2026-02-16 05:30:18.992904786 +0000 UTC" Jan 22 14:20:07 crc kubenswrapper[5099]: I0122 14:20:07.286398 5099 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="591h10m11.706509247s" Jan 22 14:20:07 crc kubenswrapper[5099]: I0122 14:20:07.338193 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vmxgj\" (UniqueName: \"kubernetes.io/projected/17104002-b9e3-479f-b59f-d90796730b1c-kube-api-access-vmxgj\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:07 crc kubenswrapper[5099]: I0122 14:20:07.753647 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484860-6zpp7" Jan 22 14:20:07 crc kubenswrapper[5099]: I0122 14:20:07.753704 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484860-6zpp7" event={"ID":"17104002-b9e3-479f-b59f-d90796730b1c","Type":"ContainerDied","Data":"d3b63e5f87dc1e2506f61d23764b063aa479b2102c72ff9d4d148ac2e1ea1c1b"} Jan 22 14:20:07 crc kubenswrapper[5099]: I0122 14:20:07.753803 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3b63e5f87dc1e2506f61d23764b063aa479b2102c72ff9d4d148ac2e1ea1c1b" Jan 22 14:20:11 crc kubenswrapper[5099]: I0122 14:20:11.575781 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-psxhg" podUID="00219034-b44a-4db2-ad80-b04ff5eacac5" containerName="registry" containerID="cri-o://a07a00489ee7df323293e4f87c95150d5e885e8bc4f53c635f76d39b8f37e7ba" gracePeriod=30 Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.486911 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.526810 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/00219034-b44a-4db2-ad80-b04ff5eacac5-installation-pull-secrets\") pod \"00219034-b44a-4db2-ad80-b04ff5eacac5\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.526932 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztwtl\" (UniqueName: \"kubernetes.io/projected/00219034-b44a-4db2-ad80-b04ff5eacac5-kube-api-access-ztwtl\") pod \"00219034-b44a-4db2-ad80-b04ff5eacac5\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.527222 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00219034-b44a-4db2-ad80-b04ff5eacac5-trusted-ca\") pod \"00219034-b44a-4db2-ad80-b04ff5eacac5\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.527324 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/00219034-b44a-4db2-ad80-b04ff5eacac5-registry-certificates\") pod \"00219034-b44a-4db2-ad80-b04ff5eacac5\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.527768 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/00219034-b44a-4db2-ad80-b04ff5eacac5-bound-sa-token\") pod \"00219034-b44a-4db2-ad80-b04ff5eacac5\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.530128 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00219034-b44a-4db2-ad80-b04ff5eacac5-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "00219034-b44a-4db2-ad80-b04ff5eacac5" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.530250 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/00219034-b44a-4db2-ad80-b04ff5eacac5-registry-tls\") pod \"00219034-b44a-4db2-ad80-b04ff5eacac5\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.530400 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/00219034-b44a-4db2-ad80-b04ff5eacac5-ca-trust-extracted\") pod \"00219034-b44a-4db2-ad80-b04ff5eacac5\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.530151 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00219034-b44a-4db2-ad80-b04ff5eacac5-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "00219034-b44a-4db2-ad80-b04ff5eacac5" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.532183 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"00219034-b44a-4db2-ad80-b04ff5eacac5\" (UID: \"00219034-b44a-4db2-ad80-b04ff5eacac5\") " Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.533837 5099 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/00219034-b44a-4db2-ad80-b04ff5eacac5-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.533893 5099 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00219034-b44a-4db2-ad80-b04ff5eacac5-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.537408 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00219034-b44a-4db2-ad80-b04ff5eacac5-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "00219034-b44a-4db2-ad80-b04ff5eacac5" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.537933 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00219034-b44a-4db2-ad80-b04ff5eacac5-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "00219034-b44a-4db2-ad80-b04ff5eacac5" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.538406 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00219034-b44a-4db2-ad80-b04ff5eacac5-kube-api-access-ztwtl" (OuterVolumeSpecName: "kube-api-access-ztwtl") pod "00219034-b44a-4db2-ad80-b04ff5eacac5" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5"). InnerVolumeSpecName "kube-api-access-ztwtl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.541839 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00219034-b44a-4db2-ad80-b04ff5eacac5-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "00219034-b44a-4db2-ad80-b04ff5eacac5" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.544087 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "00219034-b44a-4db2-ad80-b04ff5eacac5" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.554433 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00219034-b44a-4db2-ad80-b04ff5eacac5-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "00219034-b44a-4db2-ad80-b04ff5eacac5" (UID: "00219034-b44a-4db2-ad80-b04ff5eacac5"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.635859 5099 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/00219034-b44a-4db2-ad80-b04ff5eacac5-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.635918 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ztwtl\" (UniqueName: \"kubernetes.io/projected/00219034-b44a-4db2-ad80-b04ff5eacac5-kube-api-access-ztwtl\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.635935 5099 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/00219034-b44a-4db2-ad80-b04ff5eacac5-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.635949 5099 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/00219034-b44a-4db2-ad80-b04ff5eacac5-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.635965 5099 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/00219034-b44a-4db2-ad80-b04ff5eacac5-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.801185 5099 generic.go:358] "Generic (PLEG): container finished" podID="00219034-b44a-4db2-ad80-b04ff5eacac5" containerID="a07a00489ee7df323293e4f87c95150d5e885e8bc4f53c635f76d39b8f37e7ba" exitCode=0 Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.801286 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-psxhg" event={"ID":"00219034-b44a-4db2-ad80-b04ff5eacac5","Type":"ContainerDied","Data":"a07a00489ee7df323293e4f87c95150d5e885e8bc4f53c635f76d39b8f37e7ba"} Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.801325 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-psxhg" event={"ID":"00219034-b44a-4db2-ad80-b04ff5eacac5","Type":"ContainerDied","Data":"be36cb98065791dd270cf007b6efe356edbf0749dd26dcbd724c5a006702b190"} Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.801347 5099 scope.go:117] "RemoveContainer" containerID="a07a00489ee7df323293e4f87c95150d5e885e8bc4f53c635f76d39b8f37e7ba" Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.801540 5099 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-psxhg" Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.827514 5099 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-psxhg"] Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.831467 5099 scope.go:117] "RemoveContainer" containerID="a07a00489ee7df323293e4f87c95150d5e885e8bc4f53c635f76d39b8f37e7ba" Jan 22 14:20:12 crc kubenswrapper[5099]: E0122 14:20:12.831950 5099 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a07a00489ee7df323293e4f87c95150d5e885e8bc4f53c635f76d39b8f37e7ba\": container with ID starting with a07a00489ee7df323293e4f87c95150d5e885e8bc4f53c635f76d39b8f37e7ba not found: ID does not exist" containerID="a07a00489ee7df323293e4f87c95150d5e885e8bc4f53c635f76d39b8f37e7ba" Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.831993 5099 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a07a00489ee7df323293e4f87c95150d5e885e8bc4f53c635f76d39b8f37e7ba"} err="failed to get container status \"a07a00489ee7df323293e4f87c95150d5e885e8bc4f53c635f76d39b8f37e7ba\": rpc error: code = NotFound desc = could not find container \"a07a00489ee7df323293e4f87c95150d5e885e8bc4f53c635f76d39b8f37e7ba\": container with ID starting with a07a00489ee7df323293e4f87c95150d5e885e8bc4f53c635f76d39b8f37e7ba not found: ID does not exist" Jan 22 14:20:12 crc kubenswrapper[5099]: I0122 14:20:12.832598 5099 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-psxhg"] Jan 22 14:20:14 crc kubenswrapper[5099]: I0122 14:20:14.768637 5099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00219034-b44a-4db2-ad80-b04ff5eacac5" path="/var/lib/kubelet/pods/00219034-b44a-4db2-ad80-b04ff5eacac5/volumes" Jan 22 14:20:40 crc kubenswrapper[5099]: I0122 14:20:40.117463 5099 patch_prober.go:28] interesting pod/machine-config-daemon-88wst container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:20:40 crc kubenswrapper[5099]: I0122 14:20:40.118340 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-88wst" podUID="4620190f-fea2-4e88-8a94-8e1bd1e1db12" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:21:10 crc kubenswrapper[5099]: I0122 14:21:10.116087 5099 patch_prober.go:28] interesting pod/machine-config-daemon-88wst container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:21:10 crc kubenswrapper[5099]: I0122 14:21:10.117121 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-88wst" podUID="4620190f-fea2-4e88-8a94-8e1bd1e1db12" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:21:40 crc kubenswrapper[5099]: I0122 14:21:40.115906 5099 patch_prober.go:28] interesting 
pod/machine-config-daemon-88wst container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:21:40 crc kubenswrapper[5099]: I0122 14:21:40.116936 5099 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-88wst" podUID="4620190f-fea2-4e88-8a94-8e1bd1e1db12" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:21:40 crc kubenswrapper[5099]: I0122 14:21:40.117025 5099 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-88wst" Jan 22 14:21:40 crc kubenswrapper[5099]: I0122 14:21:40.117947 5099 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa9ae15fe4ad370e9704f2528ddefeed5df950fd647a16eace643fbf5d0953c4"} pod="openshift-machine-config-operator/machine-config-daemon-88wst" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 14:21:40 crc kubenswrapper[5099]: I0122 14:21:40.118034 5099 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-88wst" podUID="4620190f-fea2-4e88-8a94-8e1bd1e1db12" containerName="machine-config-daemon" containerID="cri-o://aa9ae15fe4ad370e9704f2528ddefeed5df950fd647a16eace643fbf5d0953c4" gracePeriod=600 Jan 22 14:21:40 crc kubenswrapper[5099]: I0122 14:21:40.474280 5099 generic.go:358] "Generic (PLEG): container finished" podID="4620190f-fea2-4e88-8a94-8e1bd1e1db12" containerID="aa9ae15fe4ad370e9704f2528ddefeed5df950fd647a16eace643fbf5d0953c4" exitCode=0 Jan 22 14:21:40 crc kubenswrapper[5099]: I0122 14:21:40.474389 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-88wst" event={"ID":"4620190f-fea2-4e88-8a94-8e1bd1e1db12","Type":"ContainerDied","Data":"aa9ae15fe4ad370e9704f2528ddefeed5df950fd647a16eace643fbf5d0953c4"} Jan 22 14:21:40 crc kubenswrapper[5099]: I0122 14:21:40.474981 5099 scope.go:117] "RemoveContainer" containerID="3ac81f6d12ca007b4df78462924c542cd05b380336744c2369659da7b3d6d554" Jan 22 14:21:41 crc kubenswrapper[5099]: I0122 14:21:41.485945 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-88wst" event={"ID":"4620190f-fea2-4e88-8a94-8e1bd1e1db12","Type":"ContainerStarted","Data":"2c40ff202404feceff364136a9cdafa979959ae5921fded5965463c5d3e05008"} Jan 22 14:22:00 crc kubenswrapper[5099]: I0122 14:22:00.135588 5099 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484862-dbwtj"] Jan 22 14:22:00 crc kubenswrapper[5099]: I0122 14:22:00.137571 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17104002-b9e3-479f-b59f-d90796730b1c" containerName="oc" Jan 22 14:22:00 crc kubenswrapper[5099]: I0122 14:22:00.137597 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="17104002-b9e3-479f-b59f-d90796730b1c" containerName="oc" Jan 22 14:22:00 crc kubenswrapper[5099]: I0122 14:22:00.137617 5099 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="00219034-b44a-4db2-ad80-b04ff5eacac5" 
containerName="registry" Jan 22 14:22:00 crc kubenswrapper[5099]: I0122 14:22:00.137628 5099 state_mem.go:107] "Deleted CPUSet assignment" podUID="00219034-b44a-4db2-ad80-b04ff5eacac5" containerName="registry" Jan 22 14:22:00 crc kubenswrapper[5099]: I0122 14:22:00.137819 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="00219034-b44a-4db2-ad80-b04ff5eacac5" containerName="registry" Jan 22 14:22:00 crc kubenswrapper[5099]: I0122 14:22:00.137846 5099 memory_manager.go:356] "RemoveStaleState removing state" podUID="17104002-b9e3-479f-b59f-d90796730b1c" containerName="oc" Jan 22 14:22:00 crc kubenswrapper[5099]: I0122 14:22:00.309532 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484862-dbwtj"] Jan 22 14:22:00 crc kubenswrapper[5099]: I0122 14:22:00.309809 5099 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484862-dbwtj" Jan 22 14:22:00 crc kubenswrapper[5099]: I0122 14:22:00.314107 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 14:22:00 crc kubenswrapper[5099]: I0122 14:22:00.314235 5099 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lvbxf\"" Jan 22 14:22:00 crc kubenswrapper[5099]: I0122 14:22:00.314423 5099 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 14:22:00 crc kubenswrapper[5099]: I0122 14:22:00.382099 5099 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxzpb\" (UniqueName: \"kubernetes.io/projected/26104a09-24a6-465b-a681-9bade15872bd-kube-api-access-gxzpb\") pod \"auto-csr-approver-29484862-dbwtj\" (UID: \"26104a09-24a6-465b-a681-9bade15872bd\") " pod="openshift-infra/auto-csr-approver-29484862-dbwtj" Jan 22 14:22:00 crc kubenswrapper[5099]: I0122 14:22:00.483614 5099 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gxzpb\" (UniqueName: \"kubernetes.io/projected/26104a09-24a6-465b-a681-9bade15872bd-kube-api-access-gxzpb\") pod \"auto-csr-approver-29484862-dbwtj\" (UID: \"26104a09-24a6-465b-a681-9bade15872bd\") " pod="openshift-infra/auto-csr-approver-29484862-dbwtj" Jan 22 14:22:00 crc kubenswrapper[5099]: I0122 14:22:00.509889 5099 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxzpb\" (UniqueName: \"kubernetes.io/projected/26104a09-24a6-465b-a681-9bade15872bd-kube-api-access-gxzpb\") pod \"auto-csr-approver-29484862-dbwtj\" (UID: \"26104a09-24a6-465b-a681-9bade15872bd\") " pod="openshift-infra/auto-csr-approver-29484862-dbwtj" Jan 22 14:22:00 crc kubenswrapper[5099]: I0122 14:22:00.632650 5099 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484862-dbwtj" Jan 22 14:22:00 crc kubenswrapper[5099]: I0122 14:22:00.962157 5099 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484862-dbwtj"] Jan 22 14:22:01 crc kubenswrapper[5099]: I0122 14:22:01.643921 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484862-dbwtj" event={"ID":"26104a09-24a6-465b-a681-9bade15872bd","Type":"ContainerStarted","Data":"c248621f550d5786081e0c8ca56edc784649a2f8b241e1d0a40aa639e865e36b"} Jan 22 14:22:02 crc kubenswrapper[5099]: I0122 14:22:02.653841 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484862-dbwtj" event={"ID":"26104a09-24a6-465b-a681-9bade15872bd","Type":"ContainerStarted","Data":"90c4d21bac4027c76fdfbbd0052439b7cecfb7502c7ee6d94bf7a08051efb267"} Jan 22 14:22:02 crc kubenswrapper[5099]: I0122 14:22:02.675296 5099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29484862-dbwtj" podStartSLOduration=1.536019397 podStartE2EDuration="2.675265904s" podCreationTimestamp="2026-01-22 14:22:00 +0000 UTC" firstStartedPulling="2026-01-22 14:22:00.969742358 +0000 UTC m=+478.677492615" lastFinishedPulling="2026-01-22 14:22:02.108988885 +0000 UTC m=+479.816739122" observedRunningTime="2026-01-22 14:22:02.672321572 +0000 UTC m=+480.380071829" watchObservedRunningTime="2026-01-22 14:22:02.675265904 +0000 UTC m=+480.383016151" Jan 22 14:22:03 crc kubenswrapper[5099]: I0122 14:22:03.664813 5099 generic.go:358] "Generic (PLEG): container finished" podID="26104a09-24a6-465b-a681-9bade15872bd" containerID="90c4d21bac4027c76fdfbbd0052439b7cecfb7502c7ee6d94bf7a08051efb267" exitCode=0 Jan 22 14:22:03 crc kubenswrapper[5099]: I0122 14:22:03.665007 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484862-dbwtj" event={"ID":"26104a09-24a6-465b-a681-9bade15872bd","Type":"ContainerDied","Data":"90c4d21bac4027c76fdfbbd0052439b7cecfb7502c7ee6d94bf7a08051efb267"} Jan 22 14:22:04 crc kubenswrapper[5099]: I0122 14:22:04.993435 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484862-dbwtj" Jan 22 14:22:05 crc kubenswrapper[5099]: I0122 14:22:05.169650 5099 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxzpb\" (UniqueName: \"kubernetes.io/projected/26104a09-24a6-465b-a681-9bade15872bd-kube-api-access-gxzpb\") pod \"26104a09-24a6-465b-a681-9bade15872bd\" (UID: \"26104a09-24a6-465b-a681-9bade15872bd\") " Jan 22 14:22:05 crc kubenswrapper[5099]: I0122 14:22:05.182263 5099 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26104a09-24a6-465b-a681-9bade15872bd-kube-api-access-gxzpb" (OuterVolumeSpecName: "kube-api-access-gxzpb") pod "26104a09-24a6-465b-a681-9bade15872bd" (UID: "26104a09-24a6-465b-a681-9bade15872bd"). InnerVolumeSpecName "kube-api-access-gxzpb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:22:05 crc kubenswrapper[5099]: I0122 14:22:05.271381 5099 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gxzpb\" (UniqueName: \"kubernetes.io/projected/26104a09-24a6-465b-a681-9bade15872bd-kube-api-access-gxzpb\") on node \"crc\" DevicePath \"\"" Jan 22 14:22:05 crc kubenswrapper[5099]: I0122 14:22:05.685053 5099 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484862-dbwtj" event={"ID":"26104a09-24a6-465b-a681-9bade15872bd","Type":"ContainerDied","Data":"c248621f550d5786081e0c8ca56edc784649a2f8b241e1d0a40aa639e865e36b"} Jan 22 14:22:05 crc kubenswrapper[5099]: I0122 14:22:05.685203 5099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c248621f550d5786081e0c8ca56edc784649a2f8b241e1d0a40aa639e865e36b" Jan 22 14:22:05 crc kubenswrapper[5099]: I0122 14:22:05.685373 5099 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484862-dbwtj"