Jan 26 00:09:02 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 26 00:09:03 crc kubenswrapper[5110]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 00:09:03 crc kubenswrapper[5110]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 26 00:09:03 crc kubenswrapper[5110]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 00:09:03 crc kubenswrapper[5110]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 00:09:03 crc kubenswrapper[5110]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 26 00:09:03 crc kubenswrapper[5110]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.031530 5110 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037508 5110 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037577 5110 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037587 5110 feature_gate.go:328] unrecognized feature gate: Example2
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037595 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037603 5110 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037612 5110 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037620 5110 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037627 5110 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037636 5110 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037644 5110 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037652 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037663 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037670 5110 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037677 5110 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037684 5110 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037691 5110 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037698 5110 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037709 5110 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037720 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037760 5110 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037768 5110 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037776 5110 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037783 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037817 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037826 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037835 5110 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037842 5110 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037849 5110 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037856 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037863 5110 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037871 5110 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037879 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037886 5110 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037893 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037900 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037907 5110 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037914 5110 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037921 5110 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037928 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037936 5110 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037943 5110 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037950 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037959 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037966 5110 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037974 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037981 5110 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037987 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.037995 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038002 5110 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038009 5110 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038015 5110 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038023 5110 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038031 5110 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038038 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038044 5110 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038051 5110 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038058 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038065 5110 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038072 5110 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038086 5110 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038093 5110 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038099 5110 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038107 5110 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038114 5110 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038121 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038128 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038135 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038142 5110 feature_gate.go:328] unrecognized feature gate: Example
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038149 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038158 5110 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038165 5110 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038171 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038179 5110 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038187 5110 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038195 5110 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038203 5110 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038210 5110 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038217 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038227 5110 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038235 5110 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038242 5110 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038249 5110 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038256 5110 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038266 5110 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038274 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.038284 5110 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039310 5110 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039325 5110 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039333 5110 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039341 5110 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039349 5110 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039360 5110 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039367 5110 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039375 5110 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039383 5110 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039390 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039397 5110 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039404 5110 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039412 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039420 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039427 5110 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039434 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039441 5110 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039448 5110 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039455 5110 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039462 5110 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039469 5110 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039476 5110 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039484 5110 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039491 5110 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039501 5110 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039509 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039516 5110 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039522 5110 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039529 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039538 5110 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039545 5110 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039552 5110 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039558 5110 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039565 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039573 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039580 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039586 5110 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039595 5110 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039602 5110 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039609 5110 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039617 5110 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039624 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039631 5110 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039637 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039644 5110 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039657 5110 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039667 5110 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039675 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039684 5110 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039693 5110 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039701 5110 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039708 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039715 5110 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039723 5110 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039729 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039736 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039743 5110 feature_gate.go:328] unrecognized feature gate: Example2
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039751 5110 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039759 5110 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039766 5110 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039774 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039783 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039818 5110 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039825 5110 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039835 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039842 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039850 5110 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039857 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039864 5110 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039872 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039879 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039886 5110 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039893 5110 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039900 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039907 5110 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039914 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039921 5110 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039928 5110 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039935 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039942 5110 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039949 5110 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039956 5110 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039963 5110 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039970 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039977 5110 feature_gate.go:328] unrecognized feature gate: Example
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.039985 5110 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040653 5110 flags.go:64] FLAG: --address="0.0.0.0"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040688 5110 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040706 5110 flags.go:64] FLAG: --anonymous-auth="true"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040719 5110 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040733 5110 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040742 5110 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040754 5110 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040766 5110 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040774 5110 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040782 5110 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040823 5110 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040837 5110 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040848 5110 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040858 5110 flags.go:64] FLAG: --cgroup-root=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040867 5110 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040878 5110 flags.go:64] FLAG: --client-ca-file=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040888 5110 flags.go:64] FLAG: --cloud-config=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040897 5110 flags.go:64] FLAG: --cloud-provider=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040906 5110 flags.go:64] FLAG: --cluster-dns="[]"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040921 5110 flags.go:64] FLAG: --cluster-domain=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040931 5110 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040942 5110 flags.go:64] FLAG: --config-dir=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040952 5110 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040963 5110 flags.go:64] FLAG: --container-log-max-files="5"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040976 5110 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040984 5110 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.040992 5110 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041001 5110 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041009 5110 flags.go:64] FLAG: --contention-profiling="false"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041016 5110 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041024 5110 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041033 5110 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041041 5110 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041053 5110 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041061 5110 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041070 5110 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041077 5110 flags.go:64] FLAG: --enable-load-reader="false"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041088 5110 flags.go:64] FLAG: --enable-server="true"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041096 5110 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041106 5110 flags.go:64] FLAG: --event-burst="100"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041115 5110 flags.go:64] FLAG: --event-qps="50"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041123 5110 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041131 5110 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041139 5110 flags.go:64] FLAG: --eviction-hard=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041153 5110 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041163 5110 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041173 5110 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041184 5110 flags.go:64] FLAG: --eviction-soft=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041194 5110 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041204 5110 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041213 5110 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041223 5110 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041232 5110 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041242 5110 flags.go:64] FLAG: --fail-swap-on="true"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041251 5110 flags.go:64] FLAG: --feature-gates=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041263 5110 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041273 5110 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041282 5110 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041292 5110 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041302 5110 flags.go:64] FLAG: --healthz-port="10248"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041312 5110 flags.go:64] FLAG: --help="false"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041321 5110 flags.go:64] FLAG: --hostname-override=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041330 5110 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041341 5110 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041351 5110 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041361 5110 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041370 5110 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041379 5110 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041389 5110 flags.go:64] FLAG: --image-service-endpoint=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041399 5110 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041410 5110 flags.go:64] FLAG: --kube-api-burst="100"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041421 5110 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041432 5110 flags.go:64] FLAG: --kube-api-qps="50"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041442 5110 flags.go:64] FLAG: --kube-reserved=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041453 5110 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041469 5110 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041480 5110 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041489 5110 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041497 5110 flags.go:64] FLAG: --lock-file=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041507 5110 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041517 5110 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041528 5110 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041546 5110 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041556 5110 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041566 5110 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041576 5110 flags.go:64] FLAG: --logging-format="text"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041584 5110 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041593 5110 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041601 5110 flags.go:64] FLAG: --manifest-url=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041610 5110 flags.go:64] FLAG: --manifest-url-header=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041622 5110 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 26 
00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041631 5110 flags.go:64] FLAG: --max-open-files="1000000" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041641 5110 flags.go:64] FLAG: --max-pods="110" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041650 5110 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041658 5110 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041666 5110 flags.go:64] FLAG: --memory-manager-policy="None" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041674 5110 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041682 5110 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041690 5110 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041700 5110 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041758 5110 flags.go:64] FLAG: --node-status-max-images="50" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041767 5110 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041776 5110 flags.go:64] FLAG: --oom-score-adj="-999" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041786 5110 flags.go:64] FLAG: --pod-cidr="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041829 5110 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041849 5110 flags.go:64] FLAG: --pod-manifest-path="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 
00:09:03.041857 5110 flags.go:64] FLAG: --pod-max-pids="-1" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041866 5110 flags.go:64] FLAG: --pods-per-core="0" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041879 5110 flags.go:64] FLAG: --port="10250" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041887 5110 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041895 5110 flags.go:64] FLAG: --provider-id="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041903 5110 flags.go:64] FLAG: --qos-reserved="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041911 5110 flags.go:64] FLAG: --read-only-port="10255" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041920 5110 flags.go:64] FLAG: --register-node="true" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041928 5110 flags.go:64] FLAG: --register-schedulable="true" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041936 5110 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041951 5110 flags.go:64] FLAG: --registry-burst="10" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041959 5110 flags.go:64] FLAG: --registry-qps="5" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041967 5110 flags.go:64] FLAG: --reserved-cpus="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041975 5110 flags.go:64] FLAG: --reserved-memory="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041988 5110 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.041996 5110 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042005 5110 flags.go:64] FLAG: --rotate-certificates="false" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042013 5110 flags.go:64] FLAG: --rotate-server-certificates="false" 
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042020 5110 flags.go:64] FLAG: --runonce="false" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042029 5110 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042037 5110 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042045 5110 flags.go:64] FLAG: --seccomp-default="false" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042052 5110 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042060 5110 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042069 5110 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042077 5110 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042085 5110 flags.go:64] FLAG: --storage-driver-password="root" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042093 5110 flags.go:64] FLAG: --storage-driver-secure="false" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042100 5110 flags.go:64] FLAG: --storage-driver-table="stats" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042108 5110 flags.go:64] FLAG: --storage-driver-user="root" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042119 5110 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042127 5110 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042135 5110 flags.go:64] FLAG: --system-cgroups="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042143 5110 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042158 
5110 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042167 5110 flags.go:64] FLAG: --tls-cert-file="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042174 5110 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042186 5110 flags.go:64] FLAG: --tls-min-version="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042194 5110 flags.go:64] FLAG: --tls-private-key-file="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042202 5110 flags.go:64] FLAG: --topology-manager-policy="none" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042210 5110 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042218 5110 flags.go:64] FLAG: --topology-manager-scope="container" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042226 5110 flags.go:64] FLAG: --v="2" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042236 5110 flags.go:64] FLAG: --version="false" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042247 5110 flags.go:64] FLAG: --vmodule="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042258 5110 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.042270 5110 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042498 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042509 5110 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042517 5110 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042525 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 26 00:09:03 crc 
kubenswrapper[5110]: W0126 00:09:03.042533 5110 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042541 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042549 5110 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042556 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042566 5110 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042576 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042584 5110 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042592 5110 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042599 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042607 5110 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042614 5110 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042621 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042630 5110 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042638 5110 feature_gate.go:328] 
unrecognized feature gate: BuildCSIVolumes Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042645 5110 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042657 5110 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042664 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042671 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042679 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042686 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042693 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042700 5110 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042708 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042715 5110 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042722 5110 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042729 5110 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042737 5110 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042745 5110 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 
26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042752 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042760 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042767 5110 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042774 5110 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042781 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042788 5110 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042832 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042841 5110 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042851 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042858 5110 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042865 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042873 5110 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042880 5110 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042887 5110 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042897 5110 feature_gate.go:349] 
Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042907 5110 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042914 5110 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042923 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042930 5110 feature_gate.go:328] unrecognized feature gate: Example2 Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042942 5110 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042949 5110 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042956 5110 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042963 5110 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042970 5110 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042977 5110 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042984 5110 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042991 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.042999 5110 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043006 5110 feature_gate.go:328] unrecognized feature gate: 
ClusterAPIInstall Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043014 5110 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043021 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043028 5110 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043035 5110 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043042 5110 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043049 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043057 5110 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043063 5110 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043070 5110 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043077 5110 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043085 5110 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043092 5110 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043099 5110 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043106 5110 feature_gate.go:328] unrecognized feature gate: 
VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043113 5110 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043120 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043127 5110 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043135 5110 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043142 5110 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043149 5110 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043156 5110 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043165 5110 feature_gate.go:328] unrecognized feature gate: Example Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043174 5110 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043182 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.043189 5110 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.043393 5110 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true 
TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.055538 5110 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.055595 5110 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055703 5110 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055715 5110 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055724 5110 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055733 5110 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055741 5110 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055749 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055756 5110 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055763 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055771 5110 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055778 5110 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055786 5110 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055825 5110 
feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055835 5110 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055845 5110 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055855 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055864 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055874 5110 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055883 5110 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055892 5110 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055901 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055909 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055917 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055923 5110 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055930 5110 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055937 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055947 5110 feature_gate.go:328] unrecognized feature gate: 
ManagedBootImages Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055955 5110 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055962 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055969 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055976 5110 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055983 5110 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.055993 5110 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056000 5110 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056007 5110 feature_gate.go:328] unrecognized feature gate: Example Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056015 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056025 5110 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056032 5110 feature_gate.go:328] unrecognized feature gate: Example2 Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056039 5110 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056046 5110 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056053 5110 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 26 
00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056060 5110 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056068 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056075 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056082 5110 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056089 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056096 5110 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056103 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056110 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056117 5110 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056124 5110 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056132 5110 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056142 5110 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056152 5110 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056160 5110 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056169 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056176 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056183 5110 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056190 5110 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056199 5110 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056206 5110 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056213 5110 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056220 5110 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056230 5110 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056240 5110 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056248 5110 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056256 5110 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056264 5110 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056271 5110 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056279 5110 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056286 5110 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056293 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056301 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056308 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056315 5110 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056322 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056329 5110 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056337 5110 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 
00:09:03.056344 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056351 5110 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056358 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056366 5110 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056374 5110 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056381 5110 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056388 5110 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056395 5110 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056401 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.056416 5110 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056649 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056661 5110 
feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056669 5110 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056676 5110 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056686 5110 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056694 5110 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056702 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056709 5110 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056717 5110 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056724 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056732 5110 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056739 5110 feature_gate.go:328] unrecognized feature gate: Example Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056747 5110 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056754 5110 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056761 5110 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056769 5110 feature_gate.go:328] 
unrecognized feature gate: VSphereMultiDisk Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056779 5110 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056788 5110 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056827 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056838 5110 feature_gate.go:328] unrecognized feature gate: Example2 Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056848 5110 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056857 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056864 5110 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056872 5110 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056880 5110 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056888 5110 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056895 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056902 5110 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056909 5110 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056916 5110 feature_gate.go:328] unrecognized 
feature gate: VSphereMultiNetworks Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056924 5110 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056931 5110 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056938 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056945 5110 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056952 5110 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056959 5110 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056966 5110 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056974 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056982 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056989 5110 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.056998 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057005 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057012 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057020 5110 feature_gate.go:328] unrecognized feature gate: 
NetworkLiveMigration Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057026 5110 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057033 5110 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057041 5110 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057048 5110 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057055 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057062 5110 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057069 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057076 5110 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057084 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057091 5110 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057098 5110 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057105 5110 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057113 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057120 5110 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 
00:09:03.057128 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057136 5110 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057144 5110 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057337 5110 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057346 5110 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057355 5110 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057362 5110 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057370 5110 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057377 5110 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057384 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057391 5110 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057398 5110 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057407 5110 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057414 5110 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057422 5110 
feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057429 5110 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057436 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057443 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057452 5110 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057460 5110 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057467 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057474 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057481 5110 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057489 5110 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057497 5110 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057504 5110 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057513 5110 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.057519 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.057533 5110 feature_gate.go:384] feature 
gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.058228 5110 server.go:962] "Client rotation is on, will bootstrap in background" Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.062294 5110 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.067045 5110 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.067252 5110 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.068274 5110 server.go:1019] "Starting client certificate rotation" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.068552 5110 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.068627 5110 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.090553 5110 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.093035 5110 
certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.093504 5110 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.108498 5110 log.go:25] "Validated CRI v1 runtime API" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.135166 5110 log.go:25] "Validated CRI v1 image API" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.137293 5110 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.140384 5110 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-01-26-00-02-36-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.140433 5110 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.166297 5110 manager.go:217] Machine: {Timestamp:2026-01-26 
00:09:03.164621725 +0000 UTC m=+0.393520374 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649930240 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:8b97abb7-24be-4a3b-9f16-cd27402370ca BootID:649ae373-4a50-43e7-bb88-2a6949129be7 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:3a:68:a3 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:3a:68:a3 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d3:c6:5a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b1:f7:00 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:37:ac:c5 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:6d:4d:fd Speed:-1 Mtu:1496} {Name:eth10 MacAddress:4a:ba:de:f9:fb:74 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} 
{Name:ovs-system MacAddress:6a:1f:ff:2f:b2:40 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.166734 5110 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.166895 5110 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.180098 5110 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.180172 5110 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None
","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.180426 5110 topology_manager.go:138] "Creating topology manager with none policy" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.180440 5110 container_manager_linux.go:306] "Creating device plugin manager" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.180473 5110 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.180973 5110 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.181445 5110 state_mem.go:36] "Initialized new in-memory state store" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.181666 5110 server.go:1267] "Using root directory" path="/var/lib/kubelet" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.182421 5110 kubelet.go:491] "Attempting to sync node with API server" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.182453 5110 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.182478 5110 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.182499 5110 kubelet.go:397] "Adding apiserver pod source" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.182521 5110 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.184987 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.185104 5110 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.185127 5110 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.185096 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.186378 5110 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.186393 5110 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.187846 5110 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.188167 5110 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.188668 5110 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.189207 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.189237 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.189245 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.189255 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.189265 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.189275 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.189298 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.189306 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.189318 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.189333 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.189348 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.189483 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.194359 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.194419 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.195579 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.218101 5110 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.218215 5110 server.go:1295] "Started kubelet"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.219025 5110 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.219029 5110 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.219148 5110 server_v1.go:47] "podresources" method="list" useActivePods=true
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.219626 5110 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 26 00:09:03 crc systemd[1]: Started Kubernetes Kubelet.
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.220908 5110 server.go:317] "Adding debug handlers to kubelet server"
Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.220615 5110 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.144:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e1f5299043f87 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.218155399 +0000 UTC m=+0.447054008,LastTimestamp:2026-01-26 00:09:03.218155399 +0000 UTC m=+0.447054008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.223696 5110 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.223784 5110 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.224205 5110 volume_manager.go:295] "The desired_state_of_world populator starts"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.224281 5110 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.224346 5110 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.224354 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.224575 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="200ms"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.225292 5110 factory.go:55] Registering systemd factory
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.225348 5110 factory.go:223] Registration of the systemd container factory successfully
Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.225550 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.227808 5110 factory.go:153] Registering CRI-O factory
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.227831 5110 factory.go:223] Registration of the crio container factory successfully
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.227887 5110 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.227912 5110 factory.go:103] Registering Raw factory
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.227931 5110 manager.go:1196] Started watching for new ooms in manager
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.228677 5110 manager.go:319] Starting recovery of all containers
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277504 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277573 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277613 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277631 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277644 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277656 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277670 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277686 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277701 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277716 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277730 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277747 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277765 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277780 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277883 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277903 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277915 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277930 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277944 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277958 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277971 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277984 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.277997 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278010 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278027 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278042 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278054 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278068 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278087 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278101 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278113 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278160 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278172 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278187 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278201 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278213 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278256 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278270 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278282 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278297 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278308 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278320 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278333 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278347 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278360 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278371 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278382 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278396 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278411 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278422 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278434 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278446 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278459 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278470 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278486 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278501 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278542 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278555 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278570 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278584 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278598 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278614 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278623 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278634 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278646 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278658 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278670 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278683 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278695 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278708 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278720 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278731 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278743 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278755 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278771 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278785 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278814 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278829 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278842 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278859 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278872 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278882 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278894 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.278933 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.279747 5110 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.279775 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.279788 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.279913 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.279924 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.279936 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.279948 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.279962 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.279971 5110 reconstruct.go:130] "Volume is marked as
uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.279982 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.279996 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280007 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280017 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280027 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280040 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280052 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280061 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280073 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280084 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280095 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280104 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" 
volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280115 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280127 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280138 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280150 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280161 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280174 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280185 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280195 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280220 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280232 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280243 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280254 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" 
volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280264 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280276 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280290 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280301 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280313 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280325 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" 
volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280337 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280349 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280360 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280370 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280380 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280390 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" 
volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280402 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280412 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280422 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280431 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280442 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280451 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" 
volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280460 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280469 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280479 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280493 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280504 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280513 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" 
seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280523 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280534 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280545 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280554 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280564 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280574 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280587 5110 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280598 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280607 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280617 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280629 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280640 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280651 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280676 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280688 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280701 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280713 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280723 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280735 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" 
volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280744 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280754 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280764 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280775 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280789 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280821 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" 
seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280834 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280848 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280864 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280879 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280901 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280914 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 
00:09:03.280925 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280937 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280951 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280967 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280982 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.280997 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281011 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281021 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281032 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281045 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281056 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281068 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281349 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" 
volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281371 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281391 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281407 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281422 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281438 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281452 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281470 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281487 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281504 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281518 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281531 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281543 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281556 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281568 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281581 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281623 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281635 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281651 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281666 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281677 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281689 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281701 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281712 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281726 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281738 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281785 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281819 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281832 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281841 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281851 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281863 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281874 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281887 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281901 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281912 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281923 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281933 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281943 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281954 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.281965 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.282005 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.282018 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.282027 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.282037 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.282045 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.282056 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.282067 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.282079 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.282094 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.282115 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.282131 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.282143 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.282154 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.282169 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.282182 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.282195 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.282209 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.282222 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext=""
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.282234 5110 reconstruct.go:97] "Volume reconstruction finished"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.282242 5110 reconciler.go:26] "Reconciler: start to sync state"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.283410 5110 manager.go:324] Recovery completed
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.297413 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.301354 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.301414 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.301434 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.302874 5110 cpu_manager.go:222] "Starting CPU manager" policy="none"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.302891 5110 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.302918 5110 state_mem.go:36] "Initialized new in-memory state store"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.306556 5110 policy_none.go:49] "None policy: Start"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.306575 5110 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.306588 5110 state_mem.go:35] "Initializing new in-memory state store"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.313235 5110 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.315670 5110 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.315741 5110 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.315853 5110 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.315867 5110 kubelet.go:2451] "Starting kubelet main sync loop"
Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.315916 5110 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.317114 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.324437 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.352250 5110 manager.go:341] "Starting Device Plugin manager"
Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.353868 5110 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.353899 5110 server.go:85] "Starting device plugin registration server"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.354378 5110 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.354393 5110 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.354942 5110 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.355027 5110 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.355051 5110 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.358375 5110 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.358456 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.416410 5110 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.416647 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.417770 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.417872 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.417887 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.418901 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.419027 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.419071 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.419547 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.419582 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.419594 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.420205 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.420231 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.420241 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.420506 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.420546 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.420564 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.421294 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.421321 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.421329 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.421743 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.421813 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.421827 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.422149 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.422205 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.422209 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.422746 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.422774 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.422785 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.423321 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.423346 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.423356 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.423644 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.423703 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.423742 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.424300 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.424322 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.424329 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.424363 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.424374 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.424387 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.425405 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="400ms"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.425559 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.425603 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.426131 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.426177 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.426187 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.451979 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.455510 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.456702 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.456769 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.456787 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.456846 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.457450 5110 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.144:6443: connect: connection refused" node="crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.458675 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.484207 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.485365 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.485423 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.485452 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.485507 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.485531 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.485571 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.485610 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.485637 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.485725 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.485817 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.485851 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.485878 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.485901 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.485922 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.485948 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.486014 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.486048 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.486068 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.486084 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.486102 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.486134 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.486150 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.486174 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.486280 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.486292 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" 
(UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.486309 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.486345 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.486356 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.486360 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.486770 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.505705 5110 kubelet.go:3336] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.511074 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.587697 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.587776 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.587822 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.587842 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.587862 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.587879 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.587881 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.587959 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.587895 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588012 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588023 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588040 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588058 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588066 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588091 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588117 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588122 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588335 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588347 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588362 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588378 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588389 5110 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588408 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588413 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588440 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588444 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588461 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588478 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588510 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588535 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588563 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.588616 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.658614 5110 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.659748 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.659790 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.659822 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.659852 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.660472 5110 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.144:6443: connect: connection refused" node="crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.753099 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.759711 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.777252 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-2aea123e23ac2089f17800ff1bd83300e14c34f7b30ae3acda7967f65e722642 WatchSource:0}: Error finding container 2aea123e23ac2089f17800ff1bd83300e14c34f7b30ae3acda7967f65e722642: Status 404 returned error can't find the container with id 2aea123e23ac2089f17800ff1bd83300e14c34f7b30ae3acda7967f65e722642 Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.779618 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-b795b1336b8dc2acb22be4cfbc8fd8e3eecd64ead5dc244d14a8e0e2b20d757c WatchSource:0}: Error finding container b795b1336b8dc2acb22be4cfbc8fd8e3eecd64ead5dc244d14a8e0e2b20d757c: Status 404 returned error can't find the container with id b795b1336b8dc2acb22be4cfbc8fd8e3eecd64ead5dc244d14a8e0e2b20d757c Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.782179 5110 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.784527 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.803338 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-b3e6c89e2a8049e41a8777f8cc8fdd9946cc6761e80f8d17a704b881731963c0 WatchSource:0}: Error finding container b3e6c89e2a8049e41a8777f8cc8fdd9946cc6761e80f8d17a704b881731963c0: Status 404 returned error can't find the container with id b3e6c89e2a8049e41a8777f8cc8fdd9946cc6761e80f8d17a704b881731963c0 Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.806386 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: I0126 00:09:03.811399 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:03 crc kubenswrapper[5110]: E0126 00:09:03.826598 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="800ms" Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.840049 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-513ccdd6c3393adb676232a3a264796bb248225838de3ab4a36ade950c3958cd WatchSource:0}: Error finding container 513ccdd6c3393adb676232a3a264796bb248225838de3ab4a36ade950c3958cd: Status 404 returned error can't find the container with id 513ccdd6c3393adb676232a3a264796bb248225838de3ab4a36ade950c3958cd Jan 26 00:09:03 crc kubenswrapper[5110]: W0126 00:09:03.853867 5110 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-decabefb65c56bbd1c7aee8c89b49cedd22de1b12e0bc78127d4377244ef890a WatchSource:0}: Error finding container decabefb65c56bbd1c7aee8c89b49cedd22de1b12e0bc78127d4377244ef890a: Status 404 returned error can't find the container with id decabefb65c56bbd1c7aee8c89b49cedd22de1b12e0bc78127d4377244ef890a Jan 26 00:09:04 crc kubenswrapper[5110]: I0126 00:09:04.060648 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:04 crc kubenswrapper[5110]: I0126 00:09:04.063530 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:04 crc kubenswrapper[5110]: I0126 00:09:04.063590 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:04 crc kubenswrapper[5110]: I0126 00:09:04.063607 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:04 crc kubenswrapper[5110]: I0126 00:09:04.063651 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:04 crc kubenswrapper[5110]: E0126 00:09:04.064327 5110 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.144:6443: connect: connection refused" node="crc" Jan 26 00:09:04 crc kubenswrapper[5110]: I0126 00:09:04.196162 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 26 00:09:04 crc kubenswrapper[5110]: I0126 00:09:04.320890 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"decabefb65c56bbd1c7aee8c89b49cedd22de1b12e0bc78127d4377244ef890a"} Jan 26 00:09:04 crc kubenswrapper[5110]: I0126 00:09:04.322110 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"513ccdd6c3393adb676232a3a264796bb248225838de3ab4a36ade950c3958cd"} Jan 26 00:09:04 crc kubenswrapper[5110]: I0126 00:09:04.323311 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b3e6c89e2a8049e41a8777f8cc8fdd9946cc6761e80f8d17a704b881731963c0"} Jan 26 00:09:04 crc kubenswrapper[5110]: I0126 00:09:04.324232 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"b795b1336b8dc2acb22be4cfbc8fd8e3eecd64ead5dc244d14a8e0e2b20d757c"} Jan 26 00:09:04 crc kubenswrapper[5110]: E0126 00:09:04.325249 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 00:09:04 crc kubenswrapper[5110]: I0126 00:09:04.326411 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"2aea123e23ac2089f17800ff1bd83300e14c34f7b30ae3acda7967f65e722642"} Jan 26 00:09:04 crc kubenswrapper[5110]: E0126 00:09:04.502242 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 00:09:04 crc kubenswrapper[5110]: E0126 00:09:04.628229 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="1.6s" Jan 26 00:09:04 crc kubenswrapper[5110]: E0126 00:09:04.665093 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 26 00:09:04 crc kubenswrapper[5110]: E0126 00:09:04.826444 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 00:09:04 crc kubenswrapper[5110]: I0126 00:09:04.864941 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:04 crc kubenswrapper[5110]: I0126 00:09:04.867660 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:04 crc kubenswrapper[5110]: I0126 00:09:04.867737 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:04 crc kubenswrapper[5110]: I0126 00:09:04.867753 5110 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:04 crc kubenswrapper[5110]: I0126 00:09:04.867808 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:04 crc kubenswrapper[5110]: E0126 00:09:04.868529 5110 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.144:6443: connect: connection refused" node="crc" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.196543 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.224505 5110 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 26 00:09:05 crc kubenswrapper[5110]: E0126 00:09:05.225591 5110 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.330452 5110 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="46fb6b4c2b5179db0acdc17fca0fe367346d5e4b61798c9054f4af190e4a8701" exitCode=0 Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.330571 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"46fb6b4c2b5179db0acdc17fca0fe367346d5e4b61798c9054f4af190e4a8701"} 
Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.330772 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.334941 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.334988 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.335007 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:05 crc kubenswrapper[5110]: E0126 00:09:05.335288 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.343218 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"201fbacf5107a565f88c0b909a2d52c716cda4ea473eb26d2c613fe248ecf434"} Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.343280 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"4c060a49b2c9f19914a87af1d9e1860fcf614171795c22fcd1fb629d84d3df5a"} Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.343294 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"74a99588daa785243f933c28a11ac41cbdab81eb6fa291c097f633a57d2e90df"} Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.343307 5110 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"4c0ec4f07f5bc1820f2ebd1ad017e35f029e5b8166971c3676b9f348147456cc"} Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.343370 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.344702 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.344735 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.344753 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:05 crc kubenswrapper[5110]: E0126 00:09:05.345044 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.350168 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="2fe6116a5b9da02a5b57aaf57f7bed21247934453d936503dd835d1eac1c23b6" exitCode=0 Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.350276 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"2fe6116a5b9da02a5b57aaf57f7bed21247934453d936503dd835d1eac1c23b6"} Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.350373 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.351731 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.351772 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.351784 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:05 crc kubenswrapper[5110]: E0126 00:09:05.352116 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.352682 5110 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="9556ace6d728cde3159c1c8e97e12787903a3279cba1610d592fe244eff5ae5a" exitCode=0 Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.352775 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"9556ace6d728cde3159c1c8e97e12787903a3279cba1610d592fe244eff5ae5a"} Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.352904 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.353972 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.354009 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.354026 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:05 crc kubenswrapper[5110]: E0126 00:09:05.354282 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not 
found" node="crc" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.356110 5110 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="e8cd7582659a048bc238ce9753c82ac7555698b76159d5cf809cfd7eddfa8d7a" exitCode=0 Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.356154 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"e8cd7582659a048bc238ce9753c82ac7555698b76159d5cf809cfd7eddfa8d7a"} Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.356258 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.356463 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.357039 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.357082 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.357095 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:05 crc kubenswrapper[5110]: E0126 00:09:05.357389 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.358236 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.358283 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 
26 00:09:05 crc kubenswrapper[5110]: I0126 00:09:05.358301 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.196459 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 26 00:09:06 crc kubenswrapper[5110]: E0126 00:09:06.229113 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="3.2s" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.360728 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"edc58dbb97e7a54abf71095739d515edea8205bf2c5e4008b22a69d40b83cb2f"} Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.360784 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"44222dc6a7066064989d8823f6c0f556f4d4d8cac2f2b4ab1462cbb51db8ff07"} Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.360809 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"a67e63f2b2d217c10ec50b836c9cca85c134aef230c27dbb1ae8139a6dddd72b"} Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.362997 5110 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="8339480b8ee693e38bc6660f0b37482ff26d091553c39db9cf75f4f8b95f96fd" 
exitCode=0 Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.363054 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"8339480b8ee693e38bc6660f0b37482ff26d091553c39db9cf75f4f8b95f96fd"} Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.363245 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.363737 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.363761 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.363771 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:06 crc kubenswrapper[5110]: E0126 00:09:06.363969 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.366635 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"30ae138b7d197dd54752f6c812a2ea9e7f23f015060d7f1c186abfe872d810a9"} Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.367206 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.371072 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.371100 5110 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.371111 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:06 crc kubenswrapper[5110]: E0126 00:09:06.371261 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.373296 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.373532 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.373685 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"d3734fa9576160d905d72573d1984bf03448ada8b30a78ff574ce89cb894632d"} Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.373708 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"c2eb152f744e6c7d629e5eb8590f1c2d570d8f370e9deede5c4436d336c754bc"} Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.373719 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"86dfaec2435d8c668da9a0ae44fdc2a617089e161cd08dcdf2559f662c8f2b3e"} Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.374507 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.374530 5110 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.374541 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:06 crc kubenswrapper[5110]: E0126 00:09:06.374833 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.375191 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.375215 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.375225 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:06 crc kubenswrapper[5110]: E0126 00:09:06.375515 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.468957 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.470340 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.470392 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.470407 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:06 crc kubenswrapper[5110]: I0126 00:09:06.470439 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 
00:09:06 crc kubenswrapper[5110]: E0126 00:09:06.471089 5110 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.144:6443: connect: connection refused" node="crc" Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 00:09:07.382732 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"bf3200eee85a5ab80dc6220259b72de8a5e82660bd84c7cf7ec0d7fa38396302"} Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 00:09:07.383061 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"fc0c2347169345d3886bfc369c472fa0e2cf381c3b596a46b7de48d84e955504"} Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 00:09:07.383194 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 00:09:07.384826 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 00:09:07.384887 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 00:09:07.384909 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:07 crc kubenswrapper[5110]: E0126 00:09:07.385253 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 00:09:07.386371 5110 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" 
containerID="fb3dc595d8f46c045ba87b7515cb2d4576bb2237750740db45c461f69d392c7a" exitCode=0 Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 00:09:07.386532 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"fb3dc595d8f46c045ba87b7515cb2d4576bb2237750740db45c461f69d392c7a"} Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 00:09:07.386564 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 00:09:07.386667 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 00:09:07.386697 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 00:09:07.387081 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 00:09:07.387525 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 00:09:07.387607 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 00:09:07.387630 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 00:09:07.387652 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 00:09:07.387695 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 
00:09:07.387713 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:07 crc kubenswrapper[5110]: E0126 00:09:07.387932 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:07 crc kubenswrapper[5110]: E0126 00:09:07.388309 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 00:09:07.388588 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 00:09:07.388624 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 00:09:07.388642 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:07 crc kubenswrapper[5110]: E0126 00:09:07.389010 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:07 crc kubenswrapper[5110]: I0126 00:09:07.673512 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:08 crc kubenswrapper[5110]: I0126 00:09:08.393219 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"2277f081ca24900f20ca43d5a74890c20bee6e3ec8ad9e4309f66cf96678660e"} Jan 26 00:09:08 crc kubenswrapper[5110]: I0126 00:09:08.393319 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"7e90c7cb0f7f52fbb22c7c05dda374c6044ece3a6eb021c481437ea8f5de1298"} Jan 26 00:09:08 crc kubenswrapper[5110]: I0126 00:09:08.393334 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"62c4e498d0857df9b3c9615022c95ca7ce1497c476eb255d3275604912cf63f6"} Jan 26 00:09:08 crc kubenswrapper[5110]: I0126 00:09:08.393473 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:08 crc kubenswrapper[5110]: I0126 00:09:08.393487 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:08 crc kubenswrapper[5110]: I0126 00:09:08.393926 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:08 crc kubenswrapper[5110]: I0126 00:09:08.394169 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:08 crc kubenswrapper[5110]: I0126 00:09:08.394279 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:08 crc kubenswrapper[5110]: I0126 00:09:08.394364 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:08 crc kubenswrapper[5110]: I0126 00:09:08.394666 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:08 crc kubenswrapper[5110]: I0126 00:09:08.394766 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:08 crc kubenswrapper[5110]: I0126 00:09:08.394822 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Jan 26 00:09:08 crc kubenswrapper[5110]: E0126 00:09:08.395174 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:08 crc kubenswrapper[5110]: E0126 00:09:08.395531 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:08 crc kubenswrapper[5110]: I0126 00:09:08.695325 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:08 crc kubenswrapper[5110]: I0126 00:09:08.695636 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:08 crc kubenswrapper[5110]: I0126 00:09:08.696809 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:08 crc kubenswrapper[5110]: I0126 00:09:08.696867 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:08 crc kubenswrapper[5110]: I0126 00:09:08.696883 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:08 crc kubenswrapper[5110]: E0126 00:09:08.697370 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:08 crc kubenswrapper[5110]: I0126 00:09:08.716292 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.282960 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.401568 5110 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"f0a980f89ffe1688954fbe60c16faeaf282be70e3d5a3968f74311a125158488"} Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.401642 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"5241883a28ef3e6af36e6b5a5cca557f4bdb4444cdb5ba3cd00b1e9f28243d6e"} Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.401725 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.401742 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.401940 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.402503 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.402534 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.402549 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.402787 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.402853 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.402864 5110 kubelet_node_status.go:736] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:09 crc kubenswrapper[5110]: E0126 00:09:09.402986 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:09 crc kubenswrapper[5110]: E0126 00:09:09.403334 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.403481 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.403533 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.403553 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:09 crc kubenswrapper[5110]: E0126 00:09:09.403857 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.486295 5110 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.671782 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.673350 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.673416 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.673440 5110 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.673482 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:09 crc kubenswrapper[5110]: I0126 00:09:09.685577 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Jan 26 00:09:10 crc kubenswrapper[5110]: I0126 00:09:10.405145 5110 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 00:09:10 crc kubenswrapper[5110]: I0126 00:09:10.405210 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:10 crc kubenswrapper[5110]: I0126 00:09:10.405357 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:10 crc kubenswrapper[5110]: I0126 00:09:10.406068 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:10 crc kubenswrapper[5110]: I0126 00:09:10.406166 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:10 crc kubenswrapper[5110]: I0126 00:09:10.406180 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:10 crc kubenswrapper[5110]: E0126 00:09:10.406596 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:10 crc kubenswrapper[5110]: I0126 00:09:10.407120 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:10 crc kubenswrapper[5110]: I0126 00:09:10.407183 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:10 crc kubenswrapper[5110]: I0126 00:09:10.407202 5110 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:10 crc kubenswrapper[5110]: E0126 00:09:10.407904 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:11 crc kubenswrapper[5110]: I0126 00:09:11.407338 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:11 crc kubenswrapper[5110]: I0126 00:09:11.408351 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:11 crc kubenswrapper[5110]: I0126 00:09:11.408396 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:11 crc kubenswrapper[5110]: I0126 00:09:11.408407 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:11 crc kubenswrapper[5110]: E0126 00:09:11.408771 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:11 crc kubenswrapper[5110]: I0126 00:09:11.762544 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:11 crc kubenswrapper[5110]: I0126 00:09:11.762861 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:11 crc kubenswrapper[5110]: I0126 00:09:11.763982 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:11 crc kubenswrapper[5110]: I0126 00:09:11.764029 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:11 crc kubenswrapper[5110]: I0126 00:09:11.764049 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 00:09:11 crc kubenswrapper[5110]: E0126 00:09:11.764517 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:11 crc kubenswrapper[5110]: I0126 00:09:11.899915 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:11 crc kubenswrapper[5110]: I0126 00:09:11.900185 5110 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 00:09:11 crc kubenswrapper[5110]: I0126 00:09:11.900243 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:11 crc kubenswrapper[5110]: I0126 00:09:11.901264 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:11 crc kubenswrapper[5110]: I0126 00:09:11.901311 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:11 crc kubenswrapper[5110]: I0126 00:09:11.901334 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:11 crc kubenswrapper[5110]: E0126 00:09:11.901854 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:12 crc kubenswrapper[5110]: I0126 00:09:12.557507 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:12 crc kubenswrapper[5110]: I0126 00:09:12.558179 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:12 crc kubenswrapper[5110]: I0126 00:09:12.559657 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 00:09:12 crc kubenswrapper[5110]: I0126 00:09:12.559740 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:12 crc kubenswrapper[5110]: I0126 00:09:12.559751 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:12 crc kubenswrapper[5110]: E0126 00:09:12.560213 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:13 crc kubenswrapper[5110]: E0126 00:09:13.358767 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:09:14 crc kubenswrapper[5110]: I0126 00:09:14.900570 5110 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 00:09:14 crc kubenswrapper[5110]: I0126 00:09:14.900669 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 00:09:16 crc kubenswrapper[5110]: I0126 00:09:16.711920 5110 trace.go:236] Trace[1368144886]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 00:09:06.709) (total time: 10001ms): Jan 26 00:09:16 crc kubenswrapper[5110]: Trace[1368144886]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:09:16.711) Jan 26 00:09:16 crc kubenswrapper[5110]: Trace[1368144886]: [10.001947139s] [10.001947139s] END Jan 26 00:09:16 crc kubenswrapper[5110]: E0126 00:09:16.711970 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 00:09:16 crc kubenswrapper[5110]: I0126 00:09:16.758279 5110 trace.go:236] Trace[1057514489]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 00:09:06.757) (total time: 10001ms): Jan 26 00:09:16 crc kubenswrapper[5110]: Trace[1057514489]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:09:16.758) Jan 26 00:09:16 crc kubenswrapper[5110]: Trace[1057514489]: [10.001143146s] [10.001143146s] END Jan 26 00:09:16 crc kubenswrapper[5110]: E0126 00:09:16.758330 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 00:09:17 crc kubenswrapper[5110]: I0126 00:09:17.197582 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 26 00:09:17 crc kubenswrapper[5110]: I0126 00:09:17.592894 5110 trace.go:236] Trace[1684764110]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 00:09:07.589) (total time: 10002ms): Jan 26 00:09:17 crc kubenswrapper[5110]: Trace[1684764110]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (00:09:17.592) Jan 26 00:09:17 crc kubenswrapper[5110]: Trace[1684764110]: [10.002970835s] [10.002970835s] END Jan 26 00:09:17 crc kubenswrapper[5110]: E0126 00:09:17.592947 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 00:09:17 crc kubenswrapper[5110]: I0126 00:09:17.600371 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 26 00:09:17 crc kubenswrapper[5110]: I0126 00:09:17.600761 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:17 crc kubenswrapper[5110]: I0126 00:09:17.602012 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:17 crc kubenswrapper[5110]: I0126 00:09:17.602063 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:17 crc kubenswrapper[5110]: I0126 00:09:17.602076 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:17 crc kubenswrapper[5110]: E0126 00:09:17.602674 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:17 crc kubenswrapper[5110]: I0126 00:09:17.614915 5110 patch_prober.go:28] interesting pod/kube-apiserver-crc 
container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 00:09:17 crc kubenswrapper[5110]: I0126 00:09:17.615013 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 00:09:17 crc kubenswrapper[5110]: I0126 00:09:17.626724 5110 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 00:09:17 crc kubenswrapper[5110]: I0126 00:09:17.626808 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 00:09:17 crc kubenswrapper[5110]: I0126 00:09:17.698466 5110 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io 
\"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Jan 26 00:09:17 crc kubenswrapper[5110]: I0126 00:09:17.698568 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 00:09:19 crc kubenswrapper[5110]: E0126 00:09:19.430557 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 26 00:09:21 crc kubenswrapper[5110]: E0126 00:09:21.475870 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 00:09:21 crc kubenswrapper[5110]: E0126 00:09:21.475881 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 00:09:22 crc kubenswrapper[5110]: I0126 00:09:22.565872 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:22 crc kubenswrapper[5110]: I0126 00:09:22.566814 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:22 crc kubenswrapper[5110]: I0126 00:09:22.568209 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 00:09:22 crc kubenswrapper[5110]: I0126 00:09:22.568279 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:22 crc kubenswrapper[5110]: I0126 00:09:22.568295 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.568898 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.616782 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f5299043f87 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.218155399 +0000 UTC m=+0.447054008,LastTimestamp:2026-01-26 00:09:03.218155399 +0000 UTC m=+0.447054008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: I0126 00:09:22.617254 5110 trace.go:236] Trace[252499774]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 00:09:07.797) (total time: 14819ms): Jan 26 00:09:22 crc kubenswrapper[5110]: Trace[252499774]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 14819ms (00:09:22.616) Jan 26 00:09:22 crc kubenswrapper[5110]: Trace[252499774]: [14.81921964s] [14.81921964s] END Jan 26 00:09:22 crc 
kubenswrapper[5110]: E0126 00:09:22.617487 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.623962 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.623851 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfa53cf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301391311 +0000 UTC m=+0.530289920,LastTimestamp:2026-01-26 00:09:03.301391311 +0000 UTC m=+0.530289920,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: I0126 00:09:22.627040 5110 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.631676 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfac763 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301420899 +0000 UTC m=+0.530319508,LastTimestamp:2026-01-26 00:09:03.301420899 +0000 UTC m=+0.530319508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.639874 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfb1280 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301440128 +0000 UTC m=+0.530338737,LastTimestamp:2026-01-26 00:09:03.301440128 +0000 UTC m=+0.530338737,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.646641 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f52a13a9177 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.355933047 +0000 UTC m=+0.584831666,LastTimestamp:2026-01-26 00:09:03.355933047 +0000 UTC m=+0.584831666,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.656771 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f529dfa53cf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfa53cf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301391311 +0000 UTC m=+0.530289920,LastTimestamp:2026-01-26 00:09:03.417836765 +0000 UTC m=+0.646735374,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.663062 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f529dfac763\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfac763 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301420899 +0000 UTC m=+0.530319508,LastTimestamp:2026-01-26 00:09:03.417880992 +0000 UTC m=+0.646779601,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.668285 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f529dfb1280\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfb1280 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301440128 +0000 UTC m=+0.530338737,LastTimestamp:2026-01-26 00:09:03.417892391 +0000 UTC m=+0.646791000,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.676987 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f529dfa53cf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfa53cf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301391311 +0000 UTC 
m=+0.530289920,LastTimestamp:2026-01-26 00:09:03.419564788 +0000 UTC m=+0.648463397,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: I0126 00:09:22.680067 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:22 crc kubenswrapper[5110]: I0126 00:09:22.680380 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:22 crc kubenswrapper[5110]: I0126 00:09:22.681575 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.681551 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f529dfac763\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfac763 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301420899 +0000 UTC m=+0.530319508,LastTimestamp:2026-01-26 00:09:03.419589326 +0000 UTC m=+0.648487935,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: I0126 00:09:22.681644 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:22 crc kubenswrapper[5110]: I0126 00:09:22.681741 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.682296 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:22 crc kubenswrapper[5110]: I0126 00:09:22.685788 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.686116 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f529dfb1280\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfb1280 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301440128 +0000 UTC m=+0.530338737,LastTimestamp:2026-01-26 00:09:03.419599265 +0000 UTC m=+0.648497874,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.689920 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f529dfa53cf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfa53cf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 
00:09:03.301391311 +0000 UTC m=+0.530289920,LastTimestamp:2026-01-26 00:09:03.420218303 +0000 UTC m=+0.649116912,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.694185 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f529dfac763\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfac763 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301420899 +0000 UTC m=+0.530319508,LastTimestamp:2026-01-26 00:09:03.420237222 +0000 UTC m=+0.649135831,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.700546 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f529dfb1280\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfb1280 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301440128 +0000 UTC m=+0.530338737,LastTimestamp:2026-01-26 00:09:03.420246121 +0000 UTC m=+0.649144720,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.705493 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f529dfa53cf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfa53cf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301391311 +0000 UTC m=+0.530289920,LastTimestamp:2026-01-26 00:09:03.421310179 +0000 UTC m=+0.650208788,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.710124 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f529dfac763\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfac763 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301420899 +0000 UTC m=+0.530319508,LastTimestamp:2026-01-26 00:09:03.421326138 +0000 UTC m=+0.650224737,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.715807 5110 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f529dfb1280\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfb1280 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301440128 +0000 UTC m=+0.530338737,LastTimestamp:2026-01-26 00:09:03.421334498 +0000 UTC m=+0.650233107,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.731449 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f529dfa53cf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfa53cf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301391311 +0000 UTC m=+0.530289920,LastTimestamp:2026-01-26 00:09:03.421780387 +0000 UTC m=+0.650678996,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.737426 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f529dfac763\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" 
in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfac763 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301420899 +0000 UTC m=+0.530319508,LastTimestamp:2026-01-26 00:09:03.421821425 +0000 UTC m=+0.650720024,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.742964 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f529dfb1280\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfb1280 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301440128 +0000 UTC m=+0.530338737,LastTimestamp:2026-01-26 00:09:03.421832084 +0000 UTC m=+0.650730693,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.748631 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f529dfa53cf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfa53cf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301391311 +0000 UTC m=+0.530289920,LastTimestamp:2026-01-26 00:09:03.422762751 +0000 UTC m=+0.651661360,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.753870 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f529dfac763\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfac763 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301420899 +0000 UTC m=+0.530319508,LastTimestamp:2026-01-26 00:09:03.42278082 +0000 UTC m=+0.651679429,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.758822 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f529dfb1280\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfb1280 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301440128 +0000 UTC m=+0.530338737,LastTimestamp:2026-01-26 00:09:03.422790099 +0000 UTC m=+0.651688708,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.763520 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f529dfa53cf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfa53cf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301391311 +0000 UTC m=+0.530289920,LastTimestamp:2026-01-26 00:09:03.423338082 +0000 UTC m=+0.652236691,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.769038 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f529dfac763\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f529dfac763 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.301420899 +0000 UTC 
m=+0.530319508,LastTimestamp:2026-01-26 00:09:03.423352641 +0000 UTC m=+0.652251240,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.776752 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f52baa92f3e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.782612798 +0000 UTC m=+1.011511407,LastTimestamp:2026-01-26 00:09:03.782612798 +0000 UTC m=+1.011511407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.782154 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f52baaeb950 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.782975824 +0000 UTC m=+1.011874453,LastTimestamp:2026-01-26 00:09:03.782975824 +0000 UTC m=+1.011874453,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.787334 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f52bc070caa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.805541546 +0000 UTC m=+1.034440165,LastTimestamp:2026-01-26 00:09:03.805541546 +0000 UTC m=+1.034440165,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.793600 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f52be63bde6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.845170662 +0000 UTC m=+1.074069271,LastTimestamp:2026-01-26 00:09:03.845170662 +0000 UTC m=+1.074069271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.800989 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f52bf45afd7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:03.859978199 +0000 UTC 
m=+1.088876808,LastTimestamp:2026-01-26 00:09:03.859978199 +0000 UTC m=+1.088876808,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.806877 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f52da92008e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:04.31796443 +0000 UTC m=+1.546863039,LastTimestamp:2026-01-26 00:09:04.31796443 +0000 UTC m=+1.546863039,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.811325 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f52da923463 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 
00:09:04.317977699 +0000 UTC m=+1.546876308,LastTimestamp:2026-01-26 00:09:04.317977699 +0000 UTC m=+1.546876308,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.815843 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f52da92344f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:04.317977679 +0000 UTC m=+1.546876288,LastTimestamp:2026-01-26 00:09:04.317977679 +0000 UTC m=+1.546876288,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.820140 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f52db02c755 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:04.325355349 +0000 UTC m=+1.554253958,LastTimestamp:2026-01-26 00:09:04.325355349 +0000 UTC m=+1.554253958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.825367 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f52db2cfd48 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:04.328121672 +0000 UTC m=+1.557020281,LastTimestamp:2026-01-26 00:09:04.328121672 +0000 UTC m=+1.557020281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.830127 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f52db50d629 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:04.330470953 +0000 UTC m=+1.559369562,LastTimestamp:2026-01-26 00:09:04.330470953 +0000 UTC m=+1.559369562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.834160 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f52db603da4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:04.331480484 +0000 UTC m=+1.560379093,LastTimestamp:2026-01-26 00:09:04.331480484 +0000 UTC m=+1.560379093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.838642 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f52db75b459 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:04.332887129 +0000 UTC m=+1.561785738,LastTimestamp:2026-01-26 00:09:04.332887129 +0000 UTC m=+1.561785738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.843312 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f52dbad0895 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:04.336513173 +0000 UTC m=+1.565411782,LastTimestamp:2026-01-26 00:09:04.336513173 +0000 UTC m=+1.565411782,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.849110 5110 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f52dbe1bfc3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:04.339967939 +0000 UTC m=+1.568866548,LastTimestamp:2026-01-26 00:09:04.339967939 +0000 UTC m=+1.568866548,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.856631 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f52dbfb5e80 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:04.341646976 +0000 UTC m=+1.570545585,LastTimestamp:2026-01-26 00:09:04.341646976 +0000 UTC m=+1.570545585,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.863302 5110 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f52ed8b3640 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:04.636286528 +0000 UTC m=+1.865185137,LastTimestamp:2026-01-26 00:09:04.636286528 +0000 UTC m=+1.865185137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.869052 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f52ee1b604a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:04.645734474 +0000 UTC m=+1.874633083,LastTimestamp:2026-01-26 00:09:04.645734474 +0000 UTC m=+1.874633083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.875250 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f52ee327085 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:04.647245957 +0000 UTC m=+1.876144566,LastTimestamp:2026-01-26 00:09:04.647245957 +0000 UTC m=+1.876144566,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.882716 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f530264df80 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:04.986095488 +0000 UTC m=+2.214994097,LastTimestamp:2026-01-26 00:09:04.986095488 +0000 UTC m=+2.214994097,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.891008 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f5302e20ea9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:04.994299561 +0000 UTC m=+2.223198170,LastTimestamp:2026-01-26 00:09:04.994299561 +0000 UTC m=+2.223198170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.897093 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f5302fba583 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:04.995976579 +0000 UTC m=+2.224875188,LastTimestamp:2026-01-26 00:09:04.995976579 +0000 UTC m=+2.224875188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.905592 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f5311b42fdb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.242951643 +0000 UTC m=+2.471850252,LastTimestamp:2026-01-26 00:09:05.242951643 +0000 UTC 
m=+2.471850252,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.911905 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f53126d991c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.255102748 +0000 UTC m=+2.484001347,LastTimestamp:2026-01-26 00:09:05.255102748 +0000 UTC m=+2.484001347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.917390 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f531760f5b7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.338160567 +0000 UTC m=+2.567059236,LastTimestamp:2026-01-26 00:09:05.338160567 +0000 UTC m=+2.567059236,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.923451 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f531874e313 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.356243731 +0000 UTC m=+2.585142370,LastTimestamp:2026-01-26 00:09:05.356243731 +0000 UTC m=+2.585142370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.933533 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f531894338d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.358295949 +0000 UTC m=+2.587194598,LastTimestamp:2026-01-26 00:09:05.358295949 +0000 UTC m=+2.587194598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.937150 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f5318bbb187 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.360884103 +0000 UTC m=+2.589782722,LastTimestamp:2026-01-26 00:09:05.360884103 +0000 UTC m=+2.589782722,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.939994 5110 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f532d41db59 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.705220953 +0000 UTC m=+2.934119562,LastTimestamp:2026-01-26 00:09:05.705220953 +0000 UTC m=+2.934119562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.943500 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f532da7c261 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.711899233 +0000 UTC m=+2.940797842,LastTimestamp:2026-01-26 00:09:05.711899233 +0000 UTC m=+2.940797842,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.951319 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f532e198a64 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.719356004 +0000 UTC m=+2.948254613,LastTimestamp:2026-01-26 00:09:05.719356004 +0000 UTC m=+2.948254613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.957068 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f532e30d3c6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.720882118 +0000 UTC m=+2.949780727,LastTimestamp:2026-01-26 00:09:05.720882118 +0000 UTC m=+2.949780727,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.972956 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f532e402176 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.721885046 +0000 UTC m=+2.950783655,LastTimestamp:2026-01-26 00:09:05.721885046 +0000 UTC m=+2.950783655,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.981057 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f532e409283 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: 
etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.721913987 +0000 UTC m=+2.950812596,LastTimestamp:2026-01-26 00:09:05.721913987 +0000 UTC m=+2.950812596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.986690 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f532eb5f17f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.729606015 +0000 UTC m=+2.958504624,LastTimestamp:2026-01-26 00:09:05.729606015 +0000 UTC m=+2.958504624,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.992611 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f532f368e75 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.738034805 +0000 UTC m=+2.966933414,LastTimestamp:2026-01-26 00:09:05.738034805 +0000 UTC m=+2.966933414,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:22 crc kubenswrapper[5110]: E0126 00:09:22.996980 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f532f4a8274 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.739342452 +0000 UTC m=+2.968241061,LastTimestamp:2026-01-26 00:09:05.739342452 +0000 UTC m=+2.968241061,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.003981 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f532f816908 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.742940424 +0000 UTC m=+2.971839063,LastTimestamp:2026-01-26 00:09:05.742940424 +0000 UTC m=+2.971839063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.010068 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f533bdfbb22 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.950448418 +0000 UTC m=+3.179347027,LastTimestamp:2026-01-26 00:09:05.950448418 +0000 UTC m=+3.179347027,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.017719 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f533d3c104b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.973276747 +0000 UTC m=+3.202175356,LastTimestamp:2026-01-26 00:09:05.973276747 +0000 UTC m=+3.202175356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.024991 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f533d55e02d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.974968365 +0000 UTC m=+3.203866974,LastTimestamp:2026-01-26 00:09:05.974968365 +0000 UTC m=+3.203866974,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.030979 5110 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f533d61ee8d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.975758477 +0000 UTC m=+3.204657096,LastTimestamp:2026-01-26 00:09:05.975758477 +0000 UTC m=+3.204657096,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.038480 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f533e179f83 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.987665795 +0000 UTC m=+3.216564404,LastTimestamp:2026-01-26 00:09:05.987665795 +0000 UTC 
m=+3.216564404,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.044058 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f533e3b9318 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:05.990021912 +0000 UTC m=+3.218920521,LastTimestamp:2026-01-26 00:09:05.990021912 +0000 UTC m=+3.218920521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.058878 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f534b06576e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:06.204637038 +0000 UTC m=+3.433535647,LastTimestamp:2026-01-26 00:09:06.204637038 +0000 UTC m=+3.433535647,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.064123 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f534bbda5f6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:06.21665023 +0000 UTC m=+3.445548839,LastTimestamp:2026-01-26 00:09:06.21665023 +0000 UTC m=+3.445548839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.076638 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f534bcce186 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:06.217648518 +0000 UTC m=+3.446547127,LastTimestamp:2026-01-26 00:09:06.217648518 +0000 UTC m=+3.446547127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.083129 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f534c4ab510 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:06.225894672 +0000 UTC m=+3.454793281,LastTimestamp:2026-01-26 00:09:06.225894672 +0000 UTC m=+3.454793281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.089469 5110 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f534c6194e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:06.227393765 +0000 UTC m=+3.456292364,LastTimestamp:2026-01-26 00:09:06.227393765 +0000 UTC m=+3.456292364,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.097519 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f535495de68 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:06.365038184 +0000 UTC m=+3.593936793,LastTimestamp:2026-01-26 00:09:06.365038184 +0000 UTC 
m=+3.593936793,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.122579 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f535b7fa047 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:06.481020999 +0000 UTC m=+3.709919618,LastTimestamp:2026-01-26 00:09:06.481020999 +0000 UTC m=+3.709919618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.134322 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f535d521321 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 
00:09:06.511590177 +0000 UTC m=+3.740488786,LastTimestamp:2026-01-26 00:09:06.511590177 +0000 UTC m=+3.740488786,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.141031 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f535d6f6e58 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:06.513514072 +0000 UTC m=+3.742412681,LastTimestamp:2026-01-26 00:09:06.513514072 +0000 UTC m=+3.742412681,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.145941 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5364804a25 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:06.632059429 +0000 UTC m=+3.860958038,LastTimestamp:2026-01-26 00:09:06.632059429 +0000 UTC m=+3.860958038,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.159147 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f53656affcc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:06.647441356 +0000 UTC m=+3.876339965,LastTimestamp:2026-01-26 00:09:06.647441356 +0000 UTC m=+3.876339965,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.164769 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f536c5f46f2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:06.76411365 +0000 UTC m=+3.993012259,LastTimestamp:2026-01-26 00:09:06.76411365 +0000 UTC m=+3.993012259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.175053 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f536d5007f2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:06.779891698 +0000 UTC m=+4.008790307,LastTimestamp:2026-01-26 00:09:06.779891698 +0000 UTC m=+4.008790307,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.209244 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5391a8a785 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:07.389679493 +0000 UTC m=+4.618578142,LastTimestamp:2026-01-26 00:09:07.389679493 +0000 UTC m=+4.618578142,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: I0126 00:09:23.215087 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.219109 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f53a5c59323 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:07.727119139 +0000 UTC m=+4.956017748,LastTimestamp:2026-01-26 00:09:07.727119139 +0000 UTC m=+4.956017748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.226737 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f53a6e3a71c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:07.745867548 +0000 UTC m=+4.974766157,LastTimestamp:2026-01-26 00:09:07.745867548 +0000 UTC m=+4.974766157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.231143 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f53a6fa2957 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:07.747342679 +0000 UTC m=+4.976241288,LastTimestamp:2026-01-26 00:09:07.747342679 +0000 UTC 
m=+4.976241288,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.235652 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.235735 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f53b80c8cd4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:08.033760468 +0000 UTC m=+5.262659077,LastTimestamp:2026-01-26 00:09:08.033760468 +0000 UTC m=+5.262659077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.239490 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f53b98a4547 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:08.058776903 +0000 UTC m=+5.287675512,LastTimestamp:2026-01-26 00:09:08.058776903 +0000 UTC m=+5.287675512,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.243750 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f53b9a362a3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:08.060422819 +0000 UTC m=+5.289321428,LastTimestamp:2026-01-26 00:09:08.060422819 +0000 UTC m=+5.289321428,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.250933 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f53c9ea9c1b openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:08.333526043 +0000 UTC m=+5.562424652,LastTimestamp:2026-01-26 00:09:08.333526043 +0000 UTC m=+5.562424652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.252993 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f53cac37686 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:08.347737734 +0000 UTC m=+5.576636343,LastTimestamp:2026-01-26 00:09:08.347737734 +0000 UTC m=+5.576636343,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.259388 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f53cad8b423 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:08.349129763 +0000 UTC m=+5.578028382,LastTimestamp:2026-01-26 00:09:08.349129763 +0000 UTC m=+5.578028382,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.266404 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f53d94fcf20 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:08.59181648 +0000 UTC m=+5.820715089,LastTimestamp:2026-01-26 00:09:08.59181648 +0000 UTC m=+5.820715089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.272546 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f53da367624 openshift-etcd 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:08.606932516 +0000 UTC m=+5.835831135,LastTimestamp:2026-01-26 00:09:08.606932516 +0000 UTC m=+5.835831135,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.277814 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f53da523ba3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:08.608752547 +0000 UTC m=+5.837651196,LastTimestamp:2026-01-26 00:09:08.608752547 +0000 UTC m=+5.837651196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.285353 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.188e1f53e898cdd2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:08.848258514 +0000 UTC m=+6.077157123,LastTimestamp:2026-01-26 00:09:08.848258514 +0000 UTC m=+6.077157123,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.291465 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f53e99ce3e4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:08.865303524 +0000 UTC m=+6.094202133,LastTimestamp:2026-01-26 00:09:08.865303524 +0000 UTC m=+6.094202133,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.301187 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 26 00:09:23 crc kubenswrapper[5110]: 
&Event{ObjectMeta:{kube-controller-manager-crc.188e1f555158adef openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 26 00:09:23 crc kubenswrapper[5110]: body: Jan 26 00:09:23 crc kubenswrapper[5110]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:14.900631023 +0000 UTC m=+12.129529632,LastTimestamp:2026-01-26 00:09:14.900631023 +0000 UTC m=+12.129529632,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:23 crc kubenswrapper[5110]: > Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.337258 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f55515a470c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 
00:09:14.900735756 +0000 UTC m=+12.129634365,LastTimestamp:2026-01-26 00:09:14.900735756 +0000 UTC m=+12.129634365,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.351653 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 26 00:09:23 crc kubenswrapper[5110]: &Event{ObjectMeta:{kube-apiserver-crc.188e1f55f3223820 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 26 00:09:23 crc kubenswrapper[5110]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 00:09:23 crc kubenswrapper[5110]: Jan 26 00:09:23 crc kubenswrapper[5110]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:17.614970912 +0000 UTC m=+14.843869521,LastTimestamp:2026-01-26 00:09:17.614970912 +0000 UTC m=+14.843869521,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:23 crc kubenswrapper[5110]: > Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.360105 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.363386 5110 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f55f3233b46 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:17.615037254 +0000 UTC m=+14.843935863,LastTimestamp:2026-01-26 00:09:17.615037254 +0000 UTC m=+14.843935863,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.368023 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f55f3223820\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 26 00:09:23 crc kubenswrapper[5110]: &Event{ObjectMeta:{kube-apiserver-crc.188e1f55f3223820 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 26 00:09:23 crc kubenswrapper[5110]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 
00:09:23 crc kubenswrapper[5110]: Jan 26 00:09:23 crc kubenswrapper[5110]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:17.614970912 +0000 UTC m=+14.843869521,LastTimestamp:2026-01-26 00:09:17.626771324 +0000 UTC m=+14.855669933,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:23 crc kubenswrapper[5110]: > Jan 26 00:09:23 crc kubenswrapper[5110]: I0126 00:09:23.370510 5110 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54728->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 26 00:09:23 crc kubenswrapper[5110]: I0126 00:09:23.370601 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54728->192.168.126.11:17697: read: connection reset by peer" Jan 26 00:09:23 crc kubenswrapper[5110]: I0126 00:09:23.370528 5110 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54716->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 26 00:09:23 crc kubenswrapper[5110]: I0126 00:09:23.370709 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 
192.168.126.11:54716->192.168.126.11:17697: read: connection reset by peer" Jan 26 00:09:23 crc kubenswrapper[5110]: I0126 00:09:23.370947 5110 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 00:09:23 crc kubenswrapper[5110]: I0126 00:09:23.371383 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.377187 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f55f3233b46\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f55f3233b46 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:17.615037254 +0000 UTC m=+14.843935863,LastTimestamp:2026-01-26 00:09:17.626831556 +0000 UTC m=+14.855730165,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.382723 5110 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 26 00:09:23 crc kubenswrapper[5110]: &Event{ObjectMeta:{kube-apiserver-crc.188e1f55f81d3fa4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 26 00:09:23 crc kubenswrapper[5110]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Jan 26 00:09:23 crc kubenswrapper[5110]: Jan 26 00:09:23 crc kubenswrapper[5110]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:17.698531236 +0000 UTC m=+14.927429845,LastTimestamp:2026-01-26 00:09:17.698531236 +0000 UTC m=+14.927429845,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:23 crc kubenswrapper[5110]: > Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.387603 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f55f3233b46\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f55f3233b46 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:17.615037254 +0000 UTC m=+14.843935863,LastTimestamp:2026-01-26 00:09:17.698592748 +0000 UTC m=+14.927491357,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.393013 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 26 00:09:23 crc kubenswrapper[5110]: &Event{ObjectMeta:{kube-apiserver-crc.188e1f574a31b01f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:54728->192.168.126.11:17697: read: connection reset by peer Jan 26 00:09:23 crc kubenswrapper[5110]: body: Jan 26 00:09:23 crc kubenswrapper[5110]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:23.370569759 +0000 UTC m=+20.599468368,LastTimestamp:2026-01-26 00:09:23.370569759 +0000 UTC m=+20.599468368,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:23 crc kubenswrapper[5110]: > Jan 26 00:09:23 crc 
kubenswrapper[5110]: E0126 00:09:23.397428 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f574a32a549 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54728->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:23.370632521 +0000 UTC m=+20.599531140,LastTimestamp:2026-01-26 00:09:23.370632521 +0000 UTC m=+20.599531140,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.401719 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 26 00:09:23 crc kubenswrapper[5110]: &Event{ObjectMeta:{kube-apiserver-crc.188e1f574a3315de openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:54716->192.168.126.11:17697: read: connection reset by peer 
Jan 26 00:09:23 crc kubenswrapper[5110]: body: Jan 26 00:09:23 crc kubenswrapper[5110]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:23.370661342 +0000 UTC m=+20.599559951,LastTimestamp:2026-01-26 00:09:23.370661342 +0000 UTC m=+20.599559951,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:23 crc kubenswrapper[5110]: > Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.407647 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f574a34557c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54716->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:23.370743164 +0000 UTC m=+20.599641773,LastTimestamp:2026-01-26 00:09:23.370743164 +0000 UTC m=+20.599641773,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.412644 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 26 00:09:23 crc kubenswrapper[5110]: &Event{ObjectMeta:{kube-apiserver-crc.188e1f574a3de1b1 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Jan 26 00:09:23 crc kubenswrapper[5110]: body: Jan 26 00:09:23 crc kubenswrapper[5110]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:23.371368881 +0000 UTC m=+20.600267490,LastTimestamp:2026-01-26 00:09:23.371368881 +0000 UTC m=+20.600267490,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:23 crc kubenswrapper[5110]: > Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.417229 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f574a406a08 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:23.371534856 +0000 UTC m=+20.600433465,LastTimestamp:2026-01-26 00:09:23.371534856 +0000 UTC m=+20.600433465,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: I0126 00:09:23.445054 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 26 00:09:23 crc kubenswrapper[5110]: I0126 00:09:23.447577 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="bf3200eee85a5ab80dc6220259b72de8a5e82660bd84c7cf7ec0d7fa38396302" exitCode=255 Jan 26 00:09:23 crc kubenswrapper[5110]: I0126 00:09:23.447693 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"bf3200eee85a5ab80dc6220259b72de8a5e82660bd84c7cf7ec0d7fa38396302"} Jan 26 00:09:23 crc kubenswrapper[5110]: I0126 00:09:23.447923 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:23 crc kubenswrapper[5110]: I0126 00:09:23.448683 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:23 crc kubenswrapper[5110]: I0126 00:09:23.448788 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:23 crc kubenswrapper[5110]: I0126 00:09:23.448877 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.449298 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:23 crc kubenswrapper[5110]: I0126 00:09:23.449632 5110 scope.go:117] "RemoveContainer" containerID="bf3200eee85a5ab80dc6220259b72de8a5e82660bd84c7cf7ec0d7fa38396302" Jan 26 00:09:23 crc 
kubenswrapper[5110]: E0126 00:09:23.457184 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f535d6f6e58\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f535d6f6e58 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:06.513514072 +0000 UTC m=+3.742412681,LastTimestamp:2026-01-26 00:09:23.451431337 +0000 UTC m=+20.680329946,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: I0126 00:09:23.458548 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:23 crc kubenswrapper[5110]: I0126 00:09:23.458851 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:23 crc kubenswrapper[5110]: I0126 00:09:23.459778 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:23 crc kubenswrapper[5110]: I0126 00:09:23.459858 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:23 crc kubenswrapper[5110]: I0126 00:09:23.459875 5110 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.460367 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:23 crc kubenswrapper[5110]: I0126 00:09:23.464669 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.691252 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f536c5f46f2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f536c5f46f2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:06.76411365 +0000 UTC m=+3.993012259,LastTimestamp:2026-01-26 00:09:23.689857164 +0000 UTC m=+20.918755773,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:23 crc kubenswrapper[5110]: E0126 00:09:23.718758 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f536d5007f2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f536d5007f2 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:06.779891698 +0000 UTC m=+4.008790307,LastTimestamp:2026-01-26 00:09:23.713004186 +0000 UTC m=+20.941902795,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:24 crc kubenswrapper[5110]: I0126 00:09:24.201262 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:24 crc kubenswrapper[5110]: I0126 00:09:24.452188 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 26 00:09:24 crc kubenswrapper[5110]: I0126 00:09:24.454995 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"1494fcf6a3932786059a2e15e43ac3ed93352012595d23245f087b55d517a3d8"} Jan 26 00:09:24 crc kubenswrapper[5110]: I0126 00:09:24.455118 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:24 crc kubenswrapper[5110]: I0126 00:09:24.455130 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:24 crc kubenswrapper[5110]: I0126 00:09:24.455864 5110 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:24 crc kubenswrapper[5110]: I0126 00:09:24.455933 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:24 crc kubenswrapper[5110]: I0126 00:09:24.455945 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:24 crc kubenswrapper[5110]: I0126 00:09:24.455891 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:24 crc kubenswrapper[5110]: I0126 00:09:24.456014 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:24 crc kubenswrapper[5110]: I0126 00:09:24.456034 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:24 crc kubenswrapper[5110]: E0126 00:09:24.456531 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:24 crc kubenswrapper[5110]: E0126 00:09:24.456658 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:25 crc kubenswrapper[5110]: I0126 00:09:25.201362 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:25 crc kubenswrapper[5110]: I0126 00:09:25.457634 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:25 crc kubenswrapper[5110]: I0126 00:09:25.457745 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:25 crc kubenswrapper[5110]: I0126 00:09:25.458385 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:25 crc kubenswrapper[5110]: I0126 00:09:25.458426 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:25 crc kubenswrapper[5110]: I0126 00:09:25.458492 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:25 crc kubenswrapper[5110]: E0126 00:09:25.458987 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:25 crc kubenswrapper[5110]: E0126 00:09:25.835846 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 26 00:09:26 crc kubenswrapper[5110]: I0126 00:09:26.201314 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:26 crc kubenswrapper[5110]: I0126 00:09:26.462557 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 26 00:09:26 crc kubenswrapper[5110]: I0126 00:09:26.463553 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 26 00:09:26 crc kubenswrapper[5110]: I0126 00:09:26.465923 5110 generic.go:358] "Generic (PLEG): 
container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="1494fcf6a3932786059a2e15e43ac3ed93352012595d23245f087b55d517a3d8" exitCode=255 Jan 26 00:09:26 crc kubenswrapper[5110]: I0126 00:09:26.466030 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"1494fcf6a3932786059a2e15e43ac3ed93352012595d23245f087b55d517a3d8"} Jan 26 00:09:26 crc kubenswrapper[5110]: I0126 00:09:26.466111 5110 scope.go:117] "RemoveContainer" containerID="bf3200eee85a5ab80dc6220259b72de8a5e82660bd84c7cf7ec0d7fa38396302" Jan 26 00:09:26 crc kubenswrapper[5110]: I0126 00:09:26.466301 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:26 crc kubenswrapper[5110]: I0126 00:09:26.467187 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:26 crc kubenswrapper[5110]: I0126 00:09:26.467231 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:26 crc kubenswrapper[5110]: I0126 00:09:26.467244 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:26 crc kubenswrapper[5110]: E0126 00:09:26.467711 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:26 crc kubenswrapper[5110]: I0126 00:09:26.468102 5110 scope.go:117] "RemoveContainer" containerID="1494fcf6a3932786059a2e15e43ac3ed93352012595d23245f087b55d517a3d8" Jan 26 00:09:26 crc kubenswrapper[5110]: E0126 00:09:26.468381 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:09:26 crc kubenswrapper[5110]: E0126 00:09:26.476069 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5802d5dea8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:26.46833732 +0000 UTC m=+23.697235929,LastTimestamp:2026-01-26 00:09:26.46833732 +0000 UTC m=+23.697235929,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:27 crc kubenswrapper[5110]: I0126 00:09:27.205150 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:27 crc kubenswrapper[5110]: I0126 00:09:27.470423 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 26 00:09:27 crc kubenswrapper[5110]: I0126 00:09:27.473076 5110 kubelet_node_status.go:413] "Setting node annotation to 
enable volume controller attach/detach" Jan 26 00:09:27 crc kubenswrapper[5110]: I0126 00:09:27.473891 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:27 crc kubenswrapper[5110]: I0126 00:09:27.473939 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:27 crc kubenswrapper[5110]: I0126 00:09:27.473951 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:27 crc kubenswrapper[5110]: E0126 00:09:27.474362 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:27 crc kubenswrapper[5110]: I0126 00:09:27.474733 5110 scope.go:117] "RemoveContainer" containerID="1494fcf6a3932786059a2e15e43ac3ed93352012595d23245f087b55d517a3d8" Jan 26 00:09:27 crc kubenswrapper[5110]: E0126 00:09:27.474989 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:09:27 crc kubenswrapper[5110]: E0126 00:09:27.479617 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f5802d5dea8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5802d5dea8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:26.46833732 +0000 UTC m=+23.697235929,LastTimestamp:2026-01-26 00:09:27.474952148 +0000 UTC m=+24.703850757,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:27 crc kubenswrapper[5110]: I0126 00:09:27.628825 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 26 00:09:27 crc kubenswrapper[5110]: I0126 00:09:27.629114 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:27 crc kubenswrapper[5110]: I0126 00:09:27.630252 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:27 crc kubenswrapper[5110]: I0126 00:09:27.630319 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:27 crc kubenswrapper[5110]: I0126 00:09:27.630334 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:27 crc kubenswrapper[5110]: E0126 00:09:27.630954 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:27 crc kubenswrapper[5110]: I0126 00:09:27.640567 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 00:09:28 crc 
kubenswrapper[5110]: I0126 00:09:28.201288 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:28 crc kubenswrapper[5110]: I0126 00:09:28.475578 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:28 crc kubenswrapper[5110]: I0126 00:09:28.476342 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:28 crc kubenswrapper[5110]: I0126 00:09:28.476393 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:28 crc kubenswrapper[5110]: I0126 00:09:28.476406 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:28 crc kubenswrapper[5110]: E0126 00:09:28.477053 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:28 crc kubenswrapper[5110]: E0126 00:09:28.590040 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 26 00:09:28 crc kubenswrapper[5110]: I0126 00:09:28.731432 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:28 crc kubenswrapper[5110]: I0126 00:09:28.731711 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:28 crc kubenswrapper[5110]: I0126 00:09:28.732745 5110 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:28 crc kubenswrapper[5110]: I0126 00:09:28.732828 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:28 crc kubenswrapper[5110]: I0126 00:09:28.732842 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:28 crc kubenswrapper[5110]: E0126 00:09:28.733421 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:28 crc kubenswrapper[5110]: I0126 00:09:28.733959 5110 scope.go:117] "RemoveContainer" containerID="1494fcf6a3932786059a2e15e43ac3ed93352012595d23245f087b55d517a3d8" Jan 26 00:09:28 crc kubenswrapper[5110]: E0126 00:09:28.734289 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:09:28 crc kubenswrapper[5110]: E0126 00:09:28.740862 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f5802d5dea8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5802d5dea8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container 
kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:26.46833732 +0000 UTC m=+23.697235929,LastTimestamp:2026-01-26 00:09:28.734243166 +0000 UTC m=+25.963141775,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:29 crc kubenswrapper[5110]: I0126 00:09:29.024612 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:29 crc kubenswrapper[5110]: I0126 00:09:29.026097 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:29 crc kubenswrapper[5110]: I0126 00:09:29.026167 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:29 crc kubenswrapper[5110]: I0126 00:09:29.026191 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:29 crc kubenswrapper[5110]: I0126 00:09:29.026235 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:29 crc kubenswrapper[5110]: E0126 00:09:29.037988 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:09:29 crc kubenswrapper[5110]: I0126 00:09:29.200680 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:30 crc kubenswrapper[5110]: I0126 00:09:30.200853 5110 csi_plugin.go:988] Failed to contact API server when 
waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:31 crc kubenswrapper[5110]: I0126 00:09:31.202270 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:31 crc kubenswrapper[5110]: E0126 00:09:31.934983 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 26 00:09:32 crc kubenswrapper[5110]: I0126 00:09:32.201914 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:32 crc kubenswrapper[5110]: E0126 00:09:32.649826 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 26 00:09:32 crc kubenswrapper[5110]: E0126 00:09:32.842148 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 26 00:09:33 crc kubenswrapper[5110]: I0126 00:09:33.201671 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:33 crc kubenswrapper[5110]: E0126 00:09:33.361209 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 26 00:09:33 crc kubenswrapper[5110]: E0126 00:09:33.845484 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 26 00:09:34 crc kubenswrapper[5110]: I0126 00:09:34.202422 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:35 crc kubenswrapper[5110]: I0126 00:09:35.202515 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:36 crc kubenswrapper[5110]: I0126 00:09:36.038613 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:36 crc kubenswrapper[5110]: I0126 00:09:36.040140 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:36 crc kubenswrapper[5110]: I0126 00:09:36.040238 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:36 crc kubenswrapper[5110]: I0126 00:09:36.040257 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:36 crc kubenswrapper[5110]: I0126 00:09:36.040306 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 26 00:09:36 crc kubenswrapper[5110]: E0126 00:09:36.053549 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 26 00:09:36 crc kubenswrapper[5110]: I0126 00:09:36.201883 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:37 crc kubenswrapper[5110]: I0126 00:09:37.202738 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:38 crc kubenswrapper[5110]: I0126 00:09:38.201910 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:39 crc kubenswrapper[5110]: I0126 00:09:39.201654 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:39 crc kubenswrapper[5110]: E0126 00:09:39.847811 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 26 00:09:40 crc kubenswrapper[5110]: I0126 00:09:40.202096 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:40 crc kubenswrapper[5110]: E0126 00:09:40.655968 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 26 00:09:41 crc kubenswrapper[5110]: I0126 00:09:41.200684 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:42 crc kubenswrapper[5110]: I0126 00:09:42.202562 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:43 crc kubenswrapper[5110]: I0126 00:09:43.054022 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:43 crc kubenswrapper[5110]: I0126 00:09:43.055289 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:43 crc kubenswrapper[5110]: I0126 00:09:43.055337 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:43 crc kubenswrapper[5110]: I0126 00:09:43.055350 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:43 crc kubenswrapper[5110]: I0126 00:09:43.055383 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 26 00:09:43 crc kubenswrapper[5110]: E0126 00:09:43.065907 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 26 00:09:43 crc kubenswrapper[5110]: I0126 00:09:43.202469 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:43 crc kubenswrapper[5110]: E0126 00:09:43.362092 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 26 00:09:44 crc kubenswrapper[5110]: I0126 00:09:44.203423 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:44 crc kubenswrapper[5110]: I0126 00:09:44.317361 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:44 crc kubenswrapper[5110]: I0126 00:09:44.319049 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:44 crc kubenswrapper[5110]: I0126 00:09:44.319139 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:44 crc kubenswrapper[5110]: I0126 00:09:44.319160 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:44 crc kubenswrapper[5110]: E0126 00:09:44.319883 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:09:44 crc kubenswrapper[5110]: I0126 00:09:44.320350 5110 scope.go:117] "RemoveContainer" containerID="1494fcf6a3932786059a2e15e43ac3ed93352012595d23245f087b55d517a3d8"
Jan 26 00:09:44 crc kubenswrapper[5110]: E0126 00:09:44.328626 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f535d6f6e58\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f535d6f6e58 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:06.513514072 +0000 UTC m=+3.742412681,LastTimestamp:2026-01-26 00:09:44.321915856 +0000 UTC m=+41.550814505,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 00:09:45 crc kubenswrapper[5110]: I0126 00:09:45.203984 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:45 crc kubenswrapper[5110]: E0126 00:09:45.378241 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f536c5f46f2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f536c5f46f2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:06.76411365 +0000 UTC m=+3.993012259,LastTimestamp:2026-01-26 00:09:45.37243429 +0000 UTC m=+42.601332899,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 00:09:45 crc kubenswrapper[5110]: E0126 00:09:45.402497 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f536d5007f2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f536d5007f2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:06.779891698 +0000 UTC m=+4.008790307,LastTimestamp:2026-01-26 00:09:45.39552698 +0000 UTC m=+42.624425599,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 00:09:45 crc kubenswrapper[5110]: I0126 00:09:45.534062 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 26 00:09:45 crc kubenswrapper[5110]: I0126 00:09:45.535488 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"92d8366f9ae8e301628ffc8198ee8e21708e4d75b089def41eb34c303a18dcad"}
Jan 26 00:09:45 crc kubenswrapper[5110]: I0126 00:09:45.535828 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:45 crc kubenswrapper[5110]: I0126 00:09:45.536581 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:45 crc kubenswrapper[5110]: I0126 00:09:45.536618 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:45 crc kubenswrapper[5110]: I0126 00:09:45.536628 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:45 crc kubenswrapper[5110]: E0126 00:09:45.537024 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:09:46 crc kubenswrapper[5110]: I0126 00:09:46.202822 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:46 crc kubenswrapper[5110]: E0126 00:09:46.854056 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 26 00:09:47 crc kubenswrapper[5110]: I0126 00:09:47.198271 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:47 crc kubenswrapper[5110]: I0126 00:09:47.544453 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 26 00:09:47 crc kubenswrapper[5110]: I0126 00:09:47.545171 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 26 00:09:47 crc kubenswrapper[5110]: I0126 00:09:47.546841 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="92d8366f9ae8e301628ffc8198ee8e21708e4d75b089def41eb34c303a18dcad" exitCode=255
Jan 26 00:09:47 crc kubenswrapper[5110]: I0126 00:09:47.546947 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"92d8366f9ae8e301628ffc8198ee8e21708e4d75b089def41eb34c303a18dcad"}
Jan 26 00:09:47 crc kubenswrapper[5110]: I0126 00:09:47.547047 5110 scope.go:117] "RemoveContainer" containerID="1494fcf6a3932786059a2e15e43ac3ed93352012595d23245f087b55d517a3d8"
Jan 26 00:09:47 crc kubenswrapper[5110]: I0126 00:09:47.547289 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:47 crc kubenswrapper[5110]: I0126 00:09:47.548018 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:47 crc kubenswrapper[5110]: I0126 00:09:47.548057 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:47 crc kubenswrapper[5110]: I0126 00:09:47.548070 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:47 crc kubenswrapper[5110]: E0126 00:09:47.548427 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:09:47 crc kubenswrapper[5110]: I0126 00:09:47.548725 5110 scope.go:117] "RemoveContainer" containerID="92d8366f9ae8e301628ffc8198ee8e21708e4d75b089def41eb34c303a18dcad"
Jan 26 00:09:47 crc kubenswrapper[5110]: E0126 00:09:47.549054 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 26 00:09:47 crc kubenswrapper[5110]: E0126 00:09:47.555556 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f5802d5dea8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5802d5dea8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:26.46833732 +0000 UTC m=+23.697235929,LastTimestamp:2026-01-26 00:09:47.549015589 +0000 UTC m=+44.777914198,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 00:09:48 crc kubenswrapper[5110]: I0126 00:09:48.203254 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:48 crc kubenswrapper[5110]: I0126 00:09:48.551076 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 26 00:09:48 crc kubenswrapper[5110]: I0126 00:09:48.730737 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:09:48 crc kubenswrapper[5110]: I0126 00:09:48.731119 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:48 crc kubenswrapper[5110]: I0126 00:09:48.732285 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:48 crc kubenswrapper[5110]: I0126 00:09:48.732340 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:48 crc kubenswrapper[5110]: I0126 00:09:48.732351 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:48 crc kubenswrapper[5110]: E0126 00:09:48.732737 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:09:48 crc kubenswrapper[5110]: I0126 00:09:48.733176 5110 scope.go:117] "RemoveContainer" containerID="92d8366f9ae8e301628ffc8198ee8e21708e4d75b089def41eb34c303a18dcad"
Jan 26 00:09:48 crc kubenswrapper[5110]: E0126 00:09:48.733470 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 26 00:09:48 crc kubenswrapper[5110]: E0126 00:09:48.739111 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f5802d5dea8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5802d5dea8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:26.46833732 +0000 UTC m=+23.697235929,LastTimestamp:2026-01-26 00:09:48.733432326 +0000 UTC m=+45.962330935,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 00:09:49 crc kubenswrapper[5110]: I0126 00:09:49.205677 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:49 crc kubenswrapper[5110]: E0126 00:09:49.518879 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 26 00:09:50 crc kubenswrapper[5110]: I0126 00:09:50.066128 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:50 crc kubenswrapper[5110]: I0126 00:09:50.068108 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:50 crc kubenswrapper[5110]: I0126 00:09:50.068150 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:50 crc kubenswrapper[5110]: I0126 00:09:50.068161 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:50 crc kubenswrapper[5110]: I0126 00:09:50.068187 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 26 00:09:50 crc kubenswrapper[5110]: E0126 00:09:50.082316 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 26 00:09:50 crc kubenswrapper[5110]: I0126 00:09:50.202204 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:51 crc kubenswrapper[5110]: I0126 00:09:51.203722 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:52 crc kubenswrapper[5110]: E0126 00:09:52.090414 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 26 00:09:52 crc kubenswrapper[5110]: I0126 00:09:52.201458 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:53 crc kubenswrapper[5110]: I0126 00:09:53.200333 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:53 crc kubenswrapper[5110]: E0126 00:09:53.362988 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 26 00:09:53 crc kubenswrapper[5110]: E0126 00:09:53.860400 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 26 00:09:54 crc kubenswrapper[5110]: E0126 00:09:54.124194 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 26 00:09:54 crc kubenswrapper[5110]: I0126 00:09:54.206998 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:55 crc kubenswrapper[5110]: I0126 00:09:55.202599 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:55 crc kubenswrapper[5110]: I0126 00:09:55.537040 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:09:55 crc kubenswrapper[5110]: I0126 00:09:55.537549 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:55 crc kubenswrapper[5110]: I0126 00:09:55.539129 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:55 crc kubenswrapper[5110]: I0126 00:09:55.539206 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:55 crc kubenswrapper[5110]: I0126 00:09:55.539223 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:55 crc kubenswrapper[5110]: E0126 00:09:55.539875 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:09:55 crc kubenswrapper[5110]: I0126 00:09:55.540356 5110 scope.go:117] "RemoveContainer" containerID="92d8366f9ae8e301628ffc8198ee8e21708e4d75b089def41eb34c303a18dcad"
Jan 26 00:09:55 crc kubenswrapper[5110]: E0126 00:09:55.540655 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 26 00:09:55 crc kubenswrapper[5110]: E0126 00:09:55.549971 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f5802d5dea8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5802d5dea8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:26.46833732 +0000 UTC m=+23.697235929,LastTimestamp:2026-01-26 00:09:55.540607339 +0000 UTC m=+52.769505968,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 00:09:56 crc kubenswrapper[5110]: I0126 00:09:56.203601 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:57 crc kubenswrapper[5110]: I0126 00:09:57.083275 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:57 crc kubenswrapper[5110]: I0126 00:09:57.084448 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:57 crc kubenswrapper[5110]: I0126 00:09:57.084487 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:57 crc kubenswrapper[5110]: I0126 00:09:57.084499 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:57 crc kubenswrapper[5110]: I0126 00:09:57.084525 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 26 00:09:57 crc kubenswrapper[5110]: E0126 00:09:57.096336 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 26 00:09:57 crc kubenswrapper[5110]: I0126 00:09:57.201071 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:58 crc kubenswrapper[5110]: I0126 00:09:58.204489 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:09:58 crc kubenswrapper[5110]: I0126 00:09:58.400496 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 00:09:58 crc kubenswrapper[5110]: I0126 00:09:58.400838 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:58 crc kubenswrapper[5110]: I0126 00:09:58.402012 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:58 crc kubenswrapper[5110]: I0126 00:09:58.402063 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:58 crc kubenswrapper[5110]: I0126 00:09:58.402076 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:58 crc kubenswrapper[5110]: E0126 00:09:58.402418 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:09:58 crc kubenswrapper[5110]: E0126 00:09:58.458495 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 26 00:09:59 crc kubenswrapper[5110]: I0126 00:09:59.204692 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:10:00 crc kubenswrapper[5110]: I0126 00:10:00.206002 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:10:00 crc kubenswrapper[5110]: E0126 00:10:00.871108 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 26 00:10:01 crc kubenswrapper[5110]: I0126 00:10:01.201118 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:10:02 crc kubenswrapper[5110]: I0126 00:10:02.204444 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:10:03 crc kubenswrapper[5110]: I0126 00:10:03.204936 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:10:03 crc kubenswrapper[5110]: E0126 00:10:03.364552 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 26 00:10:04 crc kubenswrapper[5110]: I0126 00:10:04.096710 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:10:04 crc kubenswrapper[5110]: I0126 00:10:04.097888 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:04 crc kubenswrapper[5110]: I0126 00:10:04.097927 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:04 crc kubenswrapper[5110]: I0126 00:10:04.097936 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:04 crc kubenswrapper[5110]: I0126 00:10:04.097965 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 26 00:10:04 crc kubenswrapper[5110]: E0126 00:10:04.110297 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 26 00:10:04 crc kubenswrapper[5110]: I0126 00:10:04.204765 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:10:05 crc kubenswrapper[5110]: I0126 00:10:05.203905 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:10:06 crc kubenswrapper[5110]: I0126 00:10:06.202450 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:10:07 crc kubenswrapper[5110]: I0126 00:10:07.201326 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 26 00:10:07 crc kubenswrapper[5110]: E0126 00:10:07.877238 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 26 00:10:08 crc
kubenswrapper[5110]: I0126 00:10:08.201188 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:08 crc kubenswrapper[5110]: I0126 00:10:08.246685 5110 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-qh4f4" Jan 26 00:10:08 crc kubenswrapper[5110]: I0126 00:10:08.264002 5110 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-qh4f4" Jan 26 00:10:08 crc kubenswrapper[5110]: I0126 00:10:08.317167 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:08 crc kubenswrapper[5110]: I0126 00:10:08.318179 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:08 crc kubenswrapper[5110]: I0126 00:10:08.318212 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:08 crc kubenswrapper[5110]: I0126 00:10:08.318225 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:08 crc kubenswrapper[5110]: E0126 00:10:08.318620 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:10:08 crc kubenswrapper[5110]: I0126 00:10:08.318977 5110 scope.go:117] "RemoveContainer" containerID="92d8366f9ae8e301628ffc8198ee8e21708e4d75b089def41eb34c303a18dcad" Jan 26 00:10:08 crc kubenswrapper[5110]: I0126 00:10:08.335940 5110 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 26 00:10:09 crc kubenswrapper[5110]: I0126 00:10:09.069398 5110 transport.go:147] "Certificate rotation 
detected, shutting down client connections to start using new credentials" Jan 26 00:10:09 crc kubenswrapper[5110]: I0126 00:10:09.266596 5110 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-02-25 00:05:08 +0000 UTC" deadline="2026-02-20 10:49:41.013894078 +0000 UTC" Jan 26 00:10:09 crc kubenswrapper[5110]: I0126 00:10:09.266689 5110 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="610h39m31.747210247s" Jan 26 00:10:09 crc kubenswrapper[5110]: I0126 00:10:09.615240 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 26 00:10:09 crc kubenswrapper[5110]: I0126 00:10:09.617158 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"3202209affc7d7c4869fa3cbbd7b3a166dee850e7789b407826358743c13e508"} Jan 26 00:10:09 crc kubenswrapper[5110]: I0126 00:10:09.617412 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:09 crc kubenswrapper[5110]: I0126 00:10:09.618349 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:09 crc kubenswrapper[5110]: I0126 00:10:09.618397 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:09 crc kubenswrapper[5110]: I0126 00:10:09.618432 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:09 crc kubenswrapper[5110]: E0126 00:10:09.619183 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"crc\" not found" node="crc" Jan 26 00:10:10 crc kubenswrapper[5110]: I0126 00:10:10.621187 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 26 00:10:10 crc kubenswrapper[5110]: I0126 00:10:10.622362 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 26 00:10:10 crc kubenswrapper[5110]: I0126 00:10:10.624610 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="3202209affc7d7c4869fa3cbbd7b3a166dee850e7789b407826358743c13e508" exitCode=255 Jan 26 00:10:10 crc kubenswrapper[5110]: I0126 00:10:10.624674 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"3202209affc7d7c4869fa3cbbd7b3a166dee850e7789b407826358743c13e508"} Jan 26 00:10:10 crc kubenswrapper[5110]: I0126 00:10:10.624729 5110 scope.go:117] "RemoveContainer" containerID="92d8366f9ae8e301628ffc8198ee8e21708e4d75b089def41eb34c303a18dcad" Jan 26 00:10:10 crc kubenswrapper[5110]: I0126 00:10:10.624950 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:10 crc kubenswrapper[5110]: I0126 00:10:10.625451 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:10 crc kubenswrapper[5110]: I0126 00:10:10.625488 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:10 crc kubenswrapper[5110]: I0126 00:10:10.625499 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:10 crc kubenswrapper[5110]: E0126 
00:10:10.625941 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:10:10 crc kubenswrapper[5110]: I0126 00:10:10.626235 5110 scope.go:117] "RemoveContainer" containerID="3202209affc7d7c4869fa3cbbd7b3a166dee850e7789b407826358743c13e508" Jan 26 00:10:10 crc kubenswrapper[5110]: E0126 00:10:10.626471 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.110870 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.111971 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.112014 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.112028 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.112191 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.123135 5110 kubelet_node_status.go:127] "Node was previously registered" node="crc" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.123430 5110 kubelet_node_status.go:81] "Successfully registered node" node="crc" Jan 26 00:10:11 crc kubenswrapper[5110]: E0126 00:10:11.123463 5110 kubelet_node_status.go:597] 
"Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.126815 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.126922 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.126935 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.126961 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.126974 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:11Z","lastTransitionTime":"2026-01-26T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:11 crc kubenswrapper[5110]: E0126 00:10:11.149342 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"649ae373-4a50-43e7-bb88-2a6949129be7\\\",\\\"systemUUID\\\":\\\"8b97abb7-24be-4a3b-9f16-cd27402370ca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.157900 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.157981 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.157996 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.158020 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.158036 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:11Z","lastTransitionTime":"2026-01-26T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:11 crc kubenswrapper[5110]: E0126 00:10:11.168680 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"649ae373-4a50-43e7-bb88-2a6949129be7\\\",\\\"systemUUID\\\":\\\"8b97abb7-24be-4a3b-9f16-cd27402370ca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.176392 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.176444 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.176458 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.176478 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.176490 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:11Z","lastTransitionTime":"2026-01-26T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:11 crc kubenswrapper[5110]: E0126 00:10:11.188217 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"649ae373-4a50-43e7-bb88-2a6949129be7\\\",\\\"systemUUID\\\":\\\"8b97abb7-24be-4a3b-9f16-cd27402370ca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.196678 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.196734 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.196750 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.196769 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.196784 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:11Z","lastTransitionTime":"2026-01-26T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:11 crc kubenswrapper[5110]: E0126 00:10:11.208232 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"649ae373-4a50-43e7-bb88-2a6949129be7\\\",\\\"systemUUID\\\":\\\"8b97abb7-24be-4a3b-9f16-cd27402370ca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 00:10:11 crc kubenswrapper[5110]: E0126 00:10:11.208383 5110 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Jan 26 00:10:11 crc kubenswrapper[5110]: E0126 00:10:11.208425 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:11 crc kubenswrapper[5110]: E0126 00:10:11.309443 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:11 crc kubenswrapper[5110]: E0126 00:10:11.409930 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:11 crc kubenswrapper[5110]: E0126 00:10:11.511115 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:11 crc kubenswrapper[5110]: E0126 00:10:11.612443 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:11 crc kubenswrapper[5110]: I0126 00:10:11.631165 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 26 00:10:11 crc kubenswrapper[5110]: E0126 00:10:11.713171 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:11 crc kubenswrapper[5110]: E0126 00:10:11.814092 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:11 crc kubenswrapper[5110]: E0126 00:10:11.915187 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:12 crc kubenswrapper[5110]: E0126 00:10:12.015358 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:12 crc kubenswrapper[5110]: E0126 00:10:12.115731 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:12 crc kubenswrapper[5110]: E0126 00:10:12.215871 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:12 crc kubenswrapper[5110]: E0126 00:10:12.317038 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:12 crc kubenswrapper[5110]: E0126 00:10:12.417515 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:12 crc kubenswrapper[5110]: E0126 00:10:12.518501 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:12 crc kubenswrapper[5110]: E0126 00:10:12.619359 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:12 crc kubenswrapper[5110]: E0126 00:10:12.720538 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:12 crc kubenswrapper[5110]: E0126 00:10:12.821712 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:12 crc kubenswrapper[5110]: E0126 00:10:12.922816 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:13 crc kubenswrapper[5110]: E0126 00:10:13.023847 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:13 crc kubenswrapper[5110]: E0126 00:10:13.125038 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:13 crc kubenswrapper[5110]: E0126 00:10:13.225342 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:13 crc kubenswrapper[5110]: E0126 00:10:13.326431 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:13 crc kubenswrapper[5110]: E0126 00:10:13.365945 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 26 00:10:13 crc kubenswrapper[5110]: E0126 00:10:13.426538 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:13 crc kubenswrapper[5110]: E0126 00:10:13.526912 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:13 crc kubenswrapper[5110]: E0126 00:10:13.628060 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:13 crc kubenswrapper[5110]: E0126 00:10:13.729147 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:13 crc kubenswrapper[5110]: E0126 00:10:13.829824 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:13 crc kubenswrapper[5110]: E0126 00:10:13.930657 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:14 crc kubenswrapper[5110]: E0126 00:10:14.031343 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:14 crc kubenswrapper[5110]: E0126 00:10:14.132187 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:14 crc kubenswrapper[5110]: E0126 00:10:14.232333 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:14 crc kubenswrapper[5110]: E0126 00:10:14.333215 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:14 crc kubenswrapper[5110]: E0126 00:10:14.433732 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:14 crc kubenswrapper[5110]: E0126 00:10:14.534583 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:14 crc kubenswrapper[5110]: E0126 00:10:14.635671 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:14 crc kubenswrapper[5110]: E0126 00:10:14.735903 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:14 crc kubenswrapper[5110]: E0126 00:10:14.836306 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:14 crc kubenswrapper[5110]: E0126 00:10:14.937379 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:15 crc kubenswrapper[5110]: E0126 00:10:15.038286 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:15 crc kubenswrapper[5110]: E0126 00:10:15.138949 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:15 crc kubenswrapper[5110]: E0126 00:10:15.240093 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:15 crc kubenswrapper[5110]: E0126 00:10:15.340695 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:15 crc kubenswrapper[5110]: E0126 00:10:15.441250 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:15 crc kubenswrapper[5110]: E0126 00:10:15.542394 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:15 crc kubenswrapper[5110]: E0126 00:10:15.642509 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:15 crc kubenswrapper[5110]: E0126 00:10:15.743507 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:15 crc kubenswrapper[5110]: E0126 00:10:15.843859 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:15 crc kubenswrapper[5110]: E0126 00:10:15.944003 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:16 crc kubenswrapper[5110]: E0126 00:10:16.045212 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:16 crc kubenswrapper[5110]: E0126 00:10:16.146191 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:16 crc kubenswrapper[5110]: E0126 00:10:16.246628 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:16 crc kubenswrapper[5110]: E0126 00:10:16.347248 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:16 crc kubenswrapper[5110]: E0126 00:10:16.448195 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:16 crc kubenswrapper[5110]: E0126 00:10:16.548853 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:16 crc kubenswrapper[5110]: E0126 00:10:16.649839 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:16 crc kubenswrapper[5110]: E0126 00:10:16.750900 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:16 crc kubenswrapper[5110]: E0126 00:10:16.851751 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:16 crc kubenswrapper[5110]: E0126 00:10:16.952898 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:17 crc kubenswrapper[5110]: E0126 00:10:17.053896 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:17 crc kubenswrapper[5110]: E0126 00:10:17.154436 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:17 crc kubenswrapper[5110]: E0126 00:10:17.255171 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:17 crc kubenswrapper[5110]: E0126 00:10:17.355771 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:17 crc kubenswrapper[5110]: E0126 00:10:17.455974 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:17 crc kubenswrapper[5110]: E0126 00:10:17.557022 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:17 crc kubenswrapper[5110]: E0126 00:10:17.657782 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:17 crc kubenswrapper[5110]: E0126 00:10:17.758881 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:17 crc kubenswrapper[5110]: E0126 00:10:17.859458 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:17 crc kubenswrapper[5110]: E0126 00:10:17.960286 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:18 crc kubenswrapper[5110]: E0126 00:10:18.061421 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:18 crc kubenswrapper[5110]: E0126 00:10:18.162577 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:18 crc kubenswrapper[5110]: E0126 00:10:18.262812 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:18 crc kubenswrapper[5110]: E0126 00:10:18.363493 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:18 crc kubenswrapper[5110]: E0126 00:10:18.464255 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:18 crc kubenswrapper[5110]: E0126 00:10:18.565141 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:18 crc kubenswrapper[5110]: E0126 00:10:18.665947 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:18 crc kubenswrapper[5110]: I0126 00:10:18.730549 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:10:18 crc kubenswrapper[5110]: I0126 00:10:18.730918 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:10:18 crc kubenswrapper[5110]: I0126 00:10:18.732089 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:18 crc kubenswrapper[5110]: I0126 00:10:18.732198 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:18 crc kubenswrapper[5110]: I0126 00:10:18.732262 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:18 crc kubenswrapper[5110]: E0126 00:10:18.732761 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:10:18 crc kubenswrapper[5110]: I0126 00:10:18.733141 5110 scope.go:117] "RemoveContainer" containerID="3202209affc7d7c4869fa3cbbd7b3a166dee850e7789b407826358743c13e508"
Jan 26 00:10:18 crc kubenswrapper[5110]: E0126 00:10:18.733434 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 26 00:10:18 crc kubenswrapper[5110]: E0126 00:10:18.767032 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:18 crc kubenswrapper[5110]: E0126 00:10:18.867885 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:18 crc kubenswrapper[5110]: E0126 00:10:18.969002 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:19 crc kubenswrapper[5110]: E0126 00:10:19.069959 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:19 crc kubenswrapper[5110]: E0126 00:10:19.171380 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:19 crc kubenswrapper[5110]: E0126 00:10:19.272415 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:19 crc kubenswrapper[5110]: E0126 00:10:19.372884 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:19 crc kubenswrapper[5110]: E0126 00:10:19.474074 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:19 crc kubenswrapper[5110]: E0126 00:10:19.575377 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:19 crc kubenswrapper[5110]: I0126 00:10:19.618123 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:10:19 crc kubenswrapper[5110]: I0126 00:10:19.653732 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:10:19 crc kubenswrapper[5110]: I0126 00:10:19.654453 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:19 crc kubenswrapper[5110]: I0126 00:10:19.654574 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:19 crc kubenswrapper[5110]: I0126 00:10:19.654641 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:19 crc kubenswrapper[5110]: E0126 00:10:19.655192 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:10:19 crc kubenswrapper[5110]: I0126 00:10:19.655559 5110 scope.go:117] "RemoveContainer" containerID="3202209affc7d7c4869fa3cbbd7b3a166dee850e7789b407826358743c13e508"
Jan 26 00:10:19 crc kubenswrapper[5110]: E0126 00:10:19.655875 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 26 00:10:19 crc kubenswrapper[5110]: E0126 00:10:19.676375 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:19 crc kubenswrapper[5110]: E0126 00:10:19.777491 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:19 crc kubenswrapper[5110]: E0126 00:10:19.878509 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:19 crc kubenswrapper[5110]: E0126 00:10:19.979397 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:20 crc kubenswrapper[5110]: E0126 00:10:20.080657 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:20 crc kubenswrapper[5110]: E0126 00:10:20.181746 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:20 crc kubenswrapper[5110]: E0126 00:10:20.283054 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:20 crc kubenswrapper[5110]: E0126 00:10:20.384274 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:20 crc kubenswrapper[5110]: E0126 00:10:20.485268 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:20 crc kubenswrapper[5110]: I0126 00:10:20.577479 5110 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 26 00:10:20 crc kubenswrapper[5110]: E0126 00:10:20.586438 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:20 crc kubenswrapper[5110]: E0126 00:10:20.686894 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:20 crc kubenswrapper[5110]: E0126 00:10:20.788465 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:20 crc kubenswrapper[5110]: E0126 00:10:20.889338 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:20 crc kubenswrapper[5110]: E0126 00:10:20.990115 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:21 crc kubenswrapper[5110]: E0126 00:10:21.091129 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:21 crc kubenswrapper[5110]: E0126 00:10:21.192533 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:21 crc kubenswrapper[5110]: E0126 00:10:21.293331 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:21 crc kubenswrapper[5110]: I0126 00:10:21.316831 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:10:21 crc kubenswrapper[5110]: I0126 00:10:21.318156 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:21 crc kubenswrapper[5110]: I0126 00:10:21.318333 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:21 crc kubenswrapper[5110]: I0126 00:10:21.318409 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:21 crc kubenswrapper[5110]: E0126 00:10:21.318977 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:10:21 crc kubenswrapper[5110]: E0126 00:10:21.393898 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:21 crc kubenswrapper[5110]: E0126 00:10:21.398135 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Jan 26 00:10:21 crc kubenswrapper[5110]: I0126 00:10:21.403364 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:21 crc kubenswrapper[5110]: I0126 00:10:21.403438 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:21 crc kubenswrapper[5110]: I0126 00:10:21.403457 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:21 crc kubenswrapper[5110]: I0126 00:10:21.403496 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:10:21 crc kubenswrapper[5110]: I0126 00:10:21.403515 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:21Z","lastTransitionTime":"2026-01-26T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:10:21 crc kubenswrapper[5110]: E0126 00:10:21.414303 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"649ae373-4a50-43e7-bb88-2a6949129be7\\\",\\\"systemUUID\\\":\\\"8b97abb7-24be-4a3b-9f16-cd27402370ca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:21 crc kubenswrapper[5110]: I0126 00:10:21.417858 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:21 crc kubenswrapper[5110]: I0126 00:10:21.417918 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:21 crc kubenswrapper[5110]: I0126 00:10:21.417927 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:21 crc kubenswrapper[5110]: I0126 00:10:21.417944 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:21 crc kubenswrapper[5110]: I0126 00:10:21.417956 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:21Z","lastTransitionTime":"2026-01-26T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:21 crc kubenswrapper[5110]: E0126 00:10:21.467433 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"649ae373-4a50-43e7-bb88-2a6949129be7\\\",\\\"systemUUID\\\":\\\"8b97abb7-24be-4a3b-9f16-cd27402370ca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 00:10:21 crc kubenswrapper[5110]: E0126 00:10:21.467550 5110 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Jan 26 00:10:21 crc kubenswrapper[5110]: E0126 00:10:21.494868 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:21 crc kubenswrapper[5110]: E0126 00:10:21.595235 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:21 crc kubenswrapper[5110]: E0126 00:10:21.696044 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:21 crc kubenswrapper[5110]: E0126 00:10:21.797036 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:21 crc kubenswrapper[5110]: E0126 00:10:21.897487 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:21 crc kubenswrapper[5110]: E0126 00:10:21.997690 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:22 crc kubenswrapper[5110]: E0126 00:10:22.098453 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:22 crc kubenswrapper[5110]: E0126 00:10:22.199195 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:22 crc kubenswrapper[5110]: E0126 00:10:22.299368 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:22 crc kubenswrapper[5110]: E0126 00:10:22.400684 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:22 crc kubenswrapper[5110]: E0126 00:10:22.501441 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:22 crc kubenswrapper[5110]: E0126 00:10:22.602169 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:22 crc kubenswrapper[5110]: E0126 00:10:22.702596 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:22 crc kubenswrapper[5110]: E0126 00:10:22.803327 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:22 crc kubenswrapper[5110]: E0126 00:10:22.904411 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:23 crc kubenswrapper[5110]: E0126 00:10:23.005083 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:23 crc kubenswrapper[5110]: E0126 00:10:23.105548 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:23 crc kubenswrapper[5110]: E0126 00:10:23.206075 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:23 crc kubenswrapper[5110]: E0126 00:10:23.306606 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:23 crc kubenswrapper[5110]: E0126 00:10:23.367010 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 26 00:10:23 crc kubenswrapper[5110]: E0126 00:10:23.407329 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:23 crc kubenswrapper[5110]: E0126 00:10:23.508223 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:23 crc kubenswrapper[5110]: E0126 00:10:23.608530 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:23 crc kubenswrapper[5110]: E0126 00:10:23.709542 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:23 crc kubenswrapper[5110]: E0126 00:10:23.809920 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:23 crc kubenswrapper[5110]: E0126 00:10:23.910234 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:24 crc kubenswrapper[5110]: E0126 00:10:24.011118 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:24 crc kubenswrapper[5110]: E0126 00:10:24.111288 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:24 crc kubenswrapper[5110]: E0126 00:10:24.212315 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:24 crc kubenswrapper[5110]: E0126 00:10:24.313547 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:24 crc kubenswrapper[5110]: E0126 00:10:24.413714 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:24 crc kubenswrapper[5110]: E0126 00:10:24.514500 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:24 crc kubenswrapper[5110]: E0126 00:10:24.615423 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:24 crc kubenswrapper[5110]: E0126 00:10:24.716583 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:24 crc kubenswrapper[5110]: E0126 00:10:24.816680 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:24 crc kubenswrapper[5110]: E0126 00:10:24.917890 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:25 crc kubenswrapper[5110]: E0126 00:10:25.018650 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:25 crc kubenswrapper[5110]: E0126 00:10:25.119339 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:25 crc kubenswrapper[5110]: E0126 00:10:25.220130 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:25 crc kubenswrapper[5110]: E0126 00:10:25.321530 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:25 crc kubenswrapper[5110]: E0126 00:10:25.422350 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:25 crc kubenswrapper[5110]: E0126 00:10:25.523551 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:25 crc kubenswrapper[5110]: E0126 00:10:25.624452 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:25 crc kubenswrapper[5110]: E0126 00:10:25.725342 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:25 crc kubenswrapper[5110]: E0126 00:10:25.826179 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:25 crc kubenswrapper[5110]: E0126 00:10:25.927289 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.028276 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.051781 5110 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.124580 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.130837 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.130893 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.130908 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.130930 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.130947 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:26Z","lastTransitionTime":"2026-01-26T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.137448 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.233706 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.233763 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.233777 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.233819 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.233837 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:26Z","lastTransitionTime":"2026-01-26T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.235865 5110 apiserver.go:52] "Watching apiserver"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.238734 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.244246 5110 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.244689 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc","openshift-multus/network-metrics-daemon-8ndzr","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-diagnostics/network-check-target-fhkjl","openshift-dns/node-resolver-6wqjg","openshift-image-registry/node-ca-p427g","openshift-multus/multus-additional-cni-plugins-77v2r","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt","openshift-ovn-kubernetes/ovnkube-node-bnkth","openshift-machine-config-operator/machine-config-daemon-c6tpr","openshift-multus/multus-jh4hk","openshift-network-node-identity/network-node-identity-dgvkt","openshift-network-operator/iptables-alerter-5jnd7"]
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.246128 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.246756 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.246779 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.246947 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.246980 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.247601 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.248077 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.248204 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.248959 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.249013 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.249067 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.249179 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.250394 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.250812 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.250991 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.251055 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.251148 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.251482 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.267633 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.277992 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.288054 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.298827 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.309133 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.320738 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.333636 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.336669 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.336768 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.336819 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.336853 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.336875 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:26Z","lastTransitionTime":"2026-01-26T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.338358 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.380636 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.380695 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.380736 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/db003609-47aa-4a6a-a7ec-6dbc03ded29a-os-release\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.380780 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.380833 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.380868 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.380891 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/db003609-47aa-4a6a-a7ec-6dbc03ded29a-cni-binary-copy\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.380912 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/db003609-47aa-4a6a-a7ec-6dbc03ded29a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.380936 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.381023 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.381130 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.381206 5110 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.381324 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:26.881286196 +0000 UTC m=+84.110184805 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.381518 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/db003609-47aa-4a6a-a7ec-6dbc03ded29a-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r"
Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.381644 5110 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.381812 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:26.88176641 +0000 UTC m=+84.110665019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.381881 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qwlp\" (UniqueName: \"kubernetes.io/projected/db003609-47aa-4a6a-a7ec-6dbc03ded29a-kube-api-access-8qwlp\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.381940 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.381960 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.382100 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.382197 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/db003609-47aa-4a6a-a7ec-6dbc03ded29a-cnibin\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.382257 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/db003609-47aa-4a6a-a7ec-6dbc03ded29a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.382297 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.382341 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.382377 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.382409 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db003609-47aa-4a6a-a7ec-6dbc03ded29a-system-cni-dir\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r"
Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.396218 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.396257 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.396274 5110 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.396313 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.396336 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.396349 5110 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.396365 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:26.896338101 +0000 UTC m=+84.125236710 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.396398 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:26.896385602 +0000 UTC m=+84.125284211 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.415871 5110 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.415939 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.416568 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.421331 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.421336 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.421423 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.421613 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.422324 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.426113 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.427335 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-77v2r"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.430205 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.430262 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.431398 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.431403 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.431433 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.431817 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-p427g"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.432054 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.434785 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.435335 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.435640 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.436118 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.440376 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.440471 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.440482 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.440504 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.441569 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:26Z","lastTransitionTime":"2026-01-26T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.443127 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.445113 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.461671 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77v2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db003609-47aa-4a6a-a7ec-6dbc03ded29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77v2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.465864 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-6wqjg" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.468754 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.469205 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.469438 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.470370 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.470847 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.472565 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.472593 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.473595 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.473692 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.473990 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.474712 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.475511 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.476840 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.476862 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.477441 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.478374 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.480341 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.480413 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.481017 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.481226 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.482273 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.482618 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/db003609-47aa-4a6a-a7ec-6dbc03ded29a-cni-binary-copy\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.482658 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/db003609-47aa-4a6a-a7ec-6dbc03ded29a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r" Jan 26 00:10:26 crc 
kubenswrapper[5110]: I0126 00:10:26.482747 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/db003609-47aa-4a6a-a7ec-6dbc03ded29a-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.482777 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8qwlp\" (UniqueName: \"kubernetes.io/projected/db003609-47aa-4a6a-a7ec-6dbc03ded29a-kube-api-access-8qwlp\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.482905 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.483398 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.483423 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/db003609-47aa-4a6a-a7ec-6dbc03ded29a-cni-binary-copy\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " 
pod="openshift-multus/multus-additional-cni-plugins-77v2r" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.483462 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/db003609-47aa-4a6a-a7ec-6dbc03ded29a-cnibin\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.483543 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/db003609-47aa-4a6a-a7ec-6dbc03ded29a-cnibin\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.483496 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/db003609-47aa-4a6a-a7ec-6dbc03ded29a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.483595 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.483596 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/db003609-47aa-4a6a-a7ec-6dbc03ded29a-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-77v2r\" (UID: 
\"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.483653 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/db003609-47aa-4a6a-a7ec-6dbc03ded29a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.483664 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db003609-47aa-4a6a-a7ec-6dbc03ded29a-system-cni-dir\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.483760 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/db003609-47aa-4a6a-a7ec-6dbc03ded29a-os-release\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.483693 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/db003609-47aa-4a6a-a7ec-6dbc03ded29a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.483709 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db003609-47aa-4a6a-a7ec-6dbc03ded29a-system-cni-dir\") pod 
\"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.483698 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.483913 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/db003609-47aa-4a6a-a7ec-6dbc03ded29a-os-release\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.485372 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.485440 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.485524 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.486365 5110 scope.go:117] "RemoveContainer" containerID="3202209affc7d7c4869fa3cbbd7b3a166dee850e7789b407826358743c13e508" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.486454 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.486530 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.486611 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.487897 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.487991 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.490068 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.500267 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qwlp\" (UniqueName: \"kubernetes.io/projected/db003609-47aa-4a6a-a7ec-6dbc03ded29a-kube-api-access-8qwlp\") pod \"multus-additional-cni-plugins-77v2r\" (UID: \"db003609-47aa-4a6a-a7ec-6dbc03ded29a\") " pod="openshift-multus/multus-additional-cni-plugins-77v2r" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.500581 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.510869 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.520134 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.526383 5110 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.527575 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecd0515-69bc-4e35-9cac-3edd40468f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://30ae138b7d197dd54752f6c812a2ea9e7f23f015060d7f1c186abfe872d810a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e8cd7582659a048bc238ce9753c82ac7555698b76159d5cf809cfd7eddfa8d7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8cd7582659a048bc238ce9753c82ac7555698b76159d5cf809cfd7eddfa8d7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.537531 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 00:10:26 crc 
kubenswrapper[5110]: I0126 00:10:26.539267 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7e97ec-d5be-4c28-bde0-55be95e4d947\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c0ec4f07f5bc1820f2ebd1ad017e35f029e5b8166971c3676b9f348147456cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://201fbacf5107a565f88c0b909a2d52c716cda4ea473eb26d2c613fe248ecf434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://74a99588daa785243f933c28a11ac41cbdab81eb6fa291c097f633a57d2e90df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c060a49b2c9f19914a87af1d9e1860fcf614171795c22fcd1fb629d84d3df5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPat
h\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.545518 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.545564 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.545578 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.545594 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.545606 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:26Z","lastTransitionTime":"2026-01-26T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.551448 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.560093 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.563811 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.567300 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.578873 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:26 crc kubenswrapper[5110]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 26 00:10:26 crc kubenswrapper[5110]: set -o allexport Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: source /etc/kubernetes/apiserver-url.env Jan 26 00:10:26 crc kubenswrapper[5110]: else Jan 26 00:10:26 crc kubenswrapper[5110]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 26 00:10:26 crc kubenswrapper[5110]: exit 1 Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 26 00:10:26 crc kubenswrapper[5110]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:26 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 26 00:10:26 crc kubenswrapper[5110]: W0126 00:10:26.579019 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-71de809b345a29add85b79ac29f6321c03d81ef5e19d9c7a5e695eb859b98bb5 WatchSource:0}: Error finding container 71de809b345a29add85b79ac29f6321c03d81ef5e19d9c7a5e695eb859b98bb5: Status 404 returned error can't find the container with id 71de809b345a29add85b79ac29f6321c03d81ef5e19d9c7a5e695eb859b98bb5 Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.580117 5110 
status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77v2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db003609-47aa-4a6a-a7ec-6dbc03ded29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77v2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.580421 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.581714 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:26 crc kubenswrapper[5110]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ -f "/env/_master" ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: set -o allexport Jan 26 00:10:26 crc kubenswrapper[5110]: source "/env/_master" Jan 26 00:10:26 crc kubenswrapper[5110]: set +o allexport Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 26 00:10:26 crc kubenswrapper[5110]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 26 00:10:26 crc kubenswrapper[5110]: ho_enable="--enable-hybrid-overlay" Jan 26 00:10:26 crc kubenswrapper[5110]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 26 00:10:26 crc kubenswrapper[5110]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 26 00:10:26 crc kubenswrapper[5110]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 26 00:10:26 crc kubenswrapper[5110]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 26 00:10:26 crc kubenswrapper[5110]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 26 00:10:26 crc kubenswrapper[5110]: --webhook-host=127.0.0.1 \ Jan 26 00:10:26 crc kubenswrapper[5110]: --webhook-port=9743 \ Jan 26 00:10:26 crc kubenswrapper[5110]: ${ho_enable} \ Jan 26 00:10:26 crc kubenswrapper[5110]: --enable-interconnect \ Jan 26 00:10:26 crc kubenswrapper[5110]: --disable-approver \ Jan 26 00:10:26 crc kubenswrapper[5110]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 26 00:10:26 crc kubenswrapper[5110]: --wait-for-kubernetes-api=200s \ Jan 26 00:10:26 crc kubenswrapper[5110]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 26 00:10:26 crc kubenswrapper[5110]: --loglevel="${LOGLEVEL}" Jan 26 00:10:26 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:26 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.584050 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 26 00:10:26 crc 
kubenswrapper[5110]: I0126 00:10:26.584085 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.584109 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.584131 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.584589 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.584588 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.584675 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.584700 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.584739 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.584762 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.584934 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.585259 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.585406 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:26 crc kubenswrapper[5110]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ -f "/env/_master" ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: set -o allexport Jan 26 00:10:26 crc kubenswrapper[5110]: source "/env/_master" Jan 26 00:10:26 crc kubenswrapper[5110]: set +o allexport Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 26 00:10:26 crc kubenswrapper[5110]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 26 00:10:26 crc kubenswrapper[5110]: --disable-webhook \ Jan 26 00:10:26 crc kubenswrapper[5110]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 26 00:10:26 crc kubenswrapper[5110]: --loglevel="${LOGLEVEL}" Jan 26 00:10:26 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:26 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.584990 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.585587 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.585815 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.586116 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.586133 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.586308 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.586339 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.586355 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.586457 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.586544 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 
00:10:26.586508 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.586582 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.586818 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.586780 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.586935 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 26 00:10:26 
crc kubenswrapper[5110]: I0126 00:10:26.586972 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.587008 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.587119 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.587156 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.587189 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.587221 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.587255 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.587282 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.587311 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.587336 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.587362 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 
00:10:26.587354 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.587389 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.587421 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.587450 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.587222 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.587509 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.587679 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.587969 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.588344 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.588442 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.588501 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.588103 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.589020 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.589170 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.589372 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:27.089334103 +0000 UTC m=+84.318232712 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.589501 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.589575 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.589613 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.589642 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.589676 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.589707 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.589735 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.589768 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.589814 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.589883 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.589911 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.589944 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.589981 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591017 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591048 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591080 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591109 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591127 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591146 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591164 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591183 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591203 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591222 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591245 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591267 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591285 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591303 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591323 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591343 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591359 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591378 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591395 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591449 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591472 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591520 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591536 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591563 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591580 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591625 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591655 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591681 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591707 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591724 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591742 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591760 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591776 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591815 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591835 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591856 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591900 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591924 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591945 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592021 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592042 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592062 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592410 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11dce8d-124f-497f-96a2-11dd1dddd26d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mlqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mlqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-qgzzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592566 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592623 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592661 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592691 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592721 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592762 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592814 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592848 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592882 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592915 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592943 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.593002 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.593033 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.593059 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.593087 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.593131 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.594080 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.594401 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.594435 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.594782 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.594836 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.594864 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.594895 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.594925 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.594957 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.594987 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595015 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595040 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595075 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595105 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595140 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595168 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.590351 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod
"b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.590629 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.590972 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591117 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591332 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595280 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595353 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591338 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591405 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591435 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591726 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.591917 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592019 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592026 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595442 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592167 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592239 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592248 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592460 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592535 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592586 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.592765 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.593010 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.593055 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.593096 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.593266 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.593302 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.593373 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.593591 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.593674 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595575 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.593757 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.593676 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.593907 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.594568 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.594618 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.594934 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595098 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595670 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595723 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595727 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595759 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595811 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595200 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595883 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595939 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 26 00:10:26 crc 
kubenswrapper[5110]: I0126 00:10:26.595966 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.596063 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.596101 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.596142 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.596178 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.596214 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.596254 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.596290 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.596330 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.596779 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.596849 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.596928 5110 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595468 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.597078 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.594105 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595846 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595873 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595911 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.596363 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.596405 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.596431 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.596450 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.596527 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.597199 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.596567 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.596583 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.596609 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.597472 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.597485 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.597613 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.597758 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.597676 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.597939 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.597960 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.597977 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.597996 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.598024 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.598034 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.598044 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.598091 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.598119 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.598144 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.598166 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.598760 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.598874 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.598942 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.599037 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.599262 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.599279 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.599291 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.599420 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.599584 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.599649 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.599725 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.599757 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 26 00:10:26 
crc kubenswrapper[5110]: I0126 00:10:26.599779 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.599921 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.599960 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.599958 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.599991 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600025 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600055 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600084 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600114 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600139 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600177 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600177 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600183 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600210 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600238 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600268 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600296 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600324 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600362 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600391 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600423 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600453 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600488 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600530 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") 
" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600559 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600585 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600622 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600648 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600672 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600707 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600742 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600773 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600827 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600868 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600904 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600924 
5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600969 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600995 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601022 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601048 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601077 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod 
\"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601108 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601140 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601171 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601203 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601239 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601265 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601295 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601326 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601353 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601379 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601403 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601478 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601498 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601525 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601592 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601617 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601644 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601673 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601813 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601839 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601869 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601895 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601919 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601944 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601970 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.602000 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.602023 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.602048 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.602070 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.602095 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.602122 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.602148 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.602173 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.602195 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.602218 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.602247 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600210 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.600834 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.597122 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601046 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601135 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601147 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.596213 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601152 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601624 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601649 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.601695 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.602110 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.602288 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.602299 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.602475 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.602495 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.602586 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.602638 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.603055 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.603241 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.603269 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.603304 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.603628 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.603660 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.604152 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.604208 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.604564 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.604629 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.604700 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.604700 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.604767 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605047 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f11dce8d-124f-497f-96a2-11dd1dddd26d-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-qgzzt\" (UID: \"f11dce8d-124f-497f-96a2-11dd1dddd26d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605094 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mlqb\" (UniqueName: \"kubernetes.io/projected/f11dce8d-124f-497f-96a2-11dd1dddd26d-kube-api-access-4mlqb\") pod \"ovnkube-control-plane-57b78d8988-qgzzt\" (UID: \"f11dce8d-124f-497f-96a2-11dd1dddd26d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605128 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-systemd-units\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605156 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-run-netns\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605187 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-node-log\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605214 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-cni-bin\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605266 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-etc-kubernetes\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605296 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c2cba3eb-9a27-49a0-a3e6-645a8853c027-env-overrides\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605321 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cf1d78e4-c2f4-47e1-ae36-612264d9d70c-tmp-dir\") pod \"node-resolver-6wqjg\" (UID: \"cf1d78e4-c2f4-47e1-ae36-612264d9d70c\") " pod="openshift-dns/node-resolver-6wqjg"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605370 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-log-socket\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605383 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605391 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605414 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-run-ovn-kubernetes\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605451 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngjmd\" (UniqueName: \"kubernetes.io/projected/f15bed73-d669-439f-9828-7b952d9bfe65-kube-api-access-ngjmd\") pod \"machine-config-daemon-c6tpr\" (UID: \"f15bed73-d669-439f-9828-7b952d9bfe65\") " pod="openshift-machine-config-operator/machine-config-daemon-c6tpr"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605490 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-system-cni-dir\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605501 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605523 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-host-var-lib-cni-multus\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605560 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b-host\") pod \"node-ca-p427g\" (UID: \"b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b\") " pod="openshift-image-registry/node-ca-p427g"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605600 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdlbt\" (UniqueName: \"kubernetes.io/projected/cf1d78e4-c2f4-47e1-ae36-612264d9d70c-kube-api-access-mdlbt\") pod \"node-resolver-6wqjg\" (UID: \"cf1d78e4-c2f4-47e1-ae36-612264d9d70c\") " pod="openshift-dns/node-resolver-6wqjg"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605657 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6rnc\" (UniqueName: \"kubernetes.io/projected/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-kube-api-access-h6rnc\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605690 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-multus-cni-dir\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605732 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zsxq\" (UniqueName: \"kubernetes.io/projected/040d3d5f-c02a-4a70-92af-70700fd9e3c3-kube-api-access-2zsxq\") pod \"network-metrics-daemon-8ndzr\" (UID: \"040d3d5f-c02a-4a70-92af-70700fd9e3c3\") " pod="openshift-multus/network-metrics-daemon-8ndzr"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605764 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-run-openvswitch\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.605854 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-os-release\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.606841 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs\") pod \"network-metrics-daemon-8ndzr\" (UID: \"040d3d5f-c02a-4a70-92af-70700fd9e3c3\") " pod="openshift-multus/network-metrics-daemon-8ndzr"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.606947 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-run-ovn\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.606986 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-cni-netd\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607115 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-cni-binary-copy\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607146 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-multus-socket-dir-parent\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607180 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-multus-daemon-config\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607210 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-kubelet\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607210 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607273 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f11dce8d-124f-497f-96a2-11dd1dddd26d-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-qgzzt\" (UID: \"f11dce8d-124f-497f-96a2-11dd1dddd26d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607309 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48xmx\" (UniqueName: \"kubernetes.io/projected/b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b-kube-api-access-48xmx\") pod \"node-ca-p427g\" (UID: \"b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b\") " pod="openshift-image-registry/node-ca-p427g"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607350 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607385 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kp4q\" (UniqueName: \"kubernetes.io/projected/c2cba3eb-9a27-49a0-a3e6-645a8853c027-kube-api-access-6kp4q\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607417 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f15bed73-d669-439f-9828-7b952d9bfe65-proxy-tls\") pod \"machine-config-daemon-c6tpr\" (UID: \"f15bed73-d669-439f-9828-7b952d9bfe65\") " pod="openshift-machine-config-operator/machine-config-daemon-c6tpr"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607499 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-cnibin\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607566 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-host-var-lib-cni-bin\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607592 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-hostroot\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk"
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607621 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName:
\"kubernetes.io/configmap/b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b-serviceca\") pod \"node-ca-p427g\" (UID: \"b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b\") " pod="openshift-image-registry/node-ca-p427g" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607660 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-host-run-k8s-cni-cncf-io\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607690 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-host-run-netns\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607721 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-run-systemd\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607752 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-etc-openvswitch\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607783 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/cf1d78e4-c2f4-47e1-ae36-612264d9d70c-hosts-file\") pod \"node-resolver-6wqjg\" (UID: \"cf1d78e4-c2f4-47e1-ae36-612264d9d70c\") " pod="openshift-dns/node-resolver-6wqjg" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607859 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607868 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-host-var-lib-kubelet\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607928 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607967 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-multus-conf-dir\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.607981 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.608040 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-host-run-multus-certs\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.608065 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.608225 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.608502 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.608772 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.609198 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.609930 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.610110 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.595454 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.610299 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.610552 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.610667 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.610910 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.611024 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.611044 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.611368 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.611671 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.611996 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.612149 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.612629 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.612636 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.612648 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.612899 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.613062 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.613013 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.613115 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.613153 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.610960 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.608077 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-var-lib-openvswitch\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.614258 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.614357 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.614528 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.614576 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.614908 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.614947 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.615054 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.615377 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.615527 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.616687 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.617917 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.617913 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.616829 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.617953 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.617449 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8ndzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"040d3d5f-c02a-4a70-92af-70700fd9e3c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zsxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zsxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8ndzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.618149 5110 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.618170 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.618173 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.617824 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.618300 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.618216 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.617835 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.618151 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.618677 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.618738 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.618833 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.618835 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c2cba3eb-9a27-49a0-a3e6-645a8853c027-ovnkube-script-lib\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.618959 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.619037 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.619199 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.619316 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.619462 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.619547 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.619724 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.619910 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.619021 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f11dce8d-124f-497f-96a2-11dd1dddd26d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-qgzzt\" (UID: \"f11dce8d-124f-497f-96a2-11dd1dddd26d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.620102 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.620118 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.620330 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.620286 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-slash\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.620488 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.620567 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c2cba3eb-9a27-49a0-a3e6-645a8853c027-ovnkube-config\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.620697 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c2cba3eb-9a27-49a0-a3e6-645a8853c027-ovn-node-metrics-cert\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.620771 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f15bed73-d669-439f-9828-7b952d9bfe65-mcd-auth-proxy-config\") pod \"machine-config-daemon-c6tpr\" (UID: \"f15bed73-d669-439f-9828-7b952d9bfe65\") " pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.620721 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.620901 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f15bed73-d669-439f-9828-7b952d9bfe65-rootfs\") pod \"machine-config-daemon-c6tpr\" (UID: \"f15bed73-d669-439f-9828-7b952d9bfe65\") " pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.620728 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.620751 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.620803 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.620878 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.621697 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.621707 5110 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.621747 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.621019 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.621828 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.621856 5110 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.621868 5110 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.621904 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.621918 5110 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.621934 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.621948 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") 
on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.621963 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.621974 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.621987 5110 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.621992 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622003 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622023 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622047 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622070 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622085 5110 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622100 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622116 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622130 5110 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622144 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 
00:10:26.622155 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622167 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622180 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622194 5110 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622208 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622220 5110 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622233 5110 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622229 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622246 5110 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622261 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622275 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622287 5110 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622300 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622313 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622325 
5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622337 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622332 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622349 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622363 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622375 5110 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622388 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Jan 26 
00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622402 5110 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622473 5110 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622484 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622495 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622506 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622517 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622532 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622546 5110 reconciler_common.go:299] "Volume detached for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622559 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622570 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622581 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622592 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622603 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622616 5110 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622626 5110 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622636 5110 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622651 5110 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622663 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622673 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622686 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622698 5110 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622711 5110 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622721 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622733 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622747 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622758 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622769 5110 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622782 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622810 5110 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" 
Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622826 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622839 5110 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622851 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622864 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622877 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622880 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622889 5110 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622916 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622935 5110 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622957 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622973 5110 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622989 5110 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623004 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.622998 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623018 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623032 5110 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623045 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623060 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623074 5110 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623091 5110 reconciler_common.go:299] "Volume 
detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623104 5110 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623116 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623128 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623142 5110 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623159 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623171 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623187 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" 
DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623186 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623204 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623248 5110 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623260 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623269 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623280 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623290 5110 reconciler_common.go:299] "Volume detached for volume \"certs\" 
(UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623410 5110 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623424 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623437 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623448 5110 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623457 5110 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623466 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623476 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 
00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623485 5110 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623495 5110 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623505 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623515 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623525 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623534 5110 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623544 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623557 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623567 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623580 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623591 5110 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623601 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623611 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623620 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623629 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" 
DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623638 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623650 5110 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623659 5110 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623668 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623679 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623688 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623697 5110 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: 
I0126 00:10:26.623705 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623715 5110 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623724 5110 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623733 5110 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623744 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623759 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623776 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623788 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: 
\"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623833 5110 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623845 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623855 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623857 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623864 5110 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623925 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623941 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623953 5110 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623967 5110 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623977 5110 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.623989 5110 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" 
DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624002 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624014 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624150 5110 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624163 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624174 5110 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624185 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624200 5110 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624210 5110 reconciler_common.go:299] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624221 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624233 5110 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624242 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624253 5110 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624263 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624279 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624293 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: 
\"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624176 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624304 5110 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624217 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624321 5110 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624356 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624373 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624387 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624399 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624409 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624420 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: 
I0126 00:10:26.624430 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624440 5110 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624450 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624460 5110 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624470 5110 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624480 5110 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624491 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624501 5110 reconciler_common.go:299] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624513 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624523 5110 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624533 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624544 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624557 5110 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624568 5110 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624581 5110 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624593 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624603 5110 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624613 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624625 5110 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624636 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624647 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624657 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc 
kubenswrapper[5110]: I0126 00:10:26.624667 5110 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624678 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624692 5110 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624702 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624712 5110 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.624723 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.625212 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). 
InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.625281 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.625393 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.626913 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.629839 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.639385 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.642494 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd7963f-3ea7-4581-8c42-0bd2de1e5540\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e90c7cb0f7f52fbb22c7c05dda374c6044ece3a6eb021c481437ea8f5de1298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95c
a7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2277f081ca24900f20ca43d5a74890c20bee6e3ec8ad9e4309f66cf96678660e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath
\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5241883a28ef3e6af36e6b5a5cca557f4bdb4444cdb5ba3cd00b1e9f28243d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0a980f89ffe1688954fbe60c16faeaf282be70e3d5a3968f74311a125158488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://62c4e498d0857df9b3c9615022c95ca7ce1497c476eb255d3275604912cf63f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9556ace6d728cde3159c1c8e97e12787903a3279cba1610d592fe244eff5ae5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9556ace6d728cde3159c1c8e97e12787903a3279cba1610d592fe244eff5ae5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8339480b8ee693e38bc6660f0b37482ff26d091553c39db9cf75f4f8b95f96fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8339480b8ee693e38bc6660f0b37482ff26d091553c39db9cf75f4f8b95f96fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory
\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://fb3dc595d8f46c045ba87b7515cb2d4576bb2237750740db45c461f69d392c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb3dc595d8f46c045ba87b7515cb2d4576bb2237750740db45c461f69d392c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.648546 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.648719 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.648789 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.648844 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.648872 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.648891 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:26Z","lastTransitionTime":"2026-01-26T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.654540 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.658782 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.664413 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.669304 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-6wqjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d78e4-c2f4-47e1-ae36-612264d9d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdlbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6wqjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.675366 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"71de809b345a29add85b79ac29f6321c03d81ef5e19d9c7a5e695eb859b98bb5"} Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.676199 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" 
event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"ab9b3d2772d4b98372884f45e39fb52b9418f2b8709bf6a87362080bbc65c8b5"} Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.677417 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:26 crc kubenswrapper[5110]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ -f "/env/_master" ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: set -o allexport Jan 26 00:10:26 crc kubenswrapper[5110]: source "/env/_master" Jan 26 00:10:26 crc kubenswrapper[5110]: set +o allexport Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Jan 26 00:10:26 crc kubenswrapper[5110]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 26 00:10:26 crc kubenswrapper[5110]: ho_enable="--enable-hybrid-overlay" Jan 26 00:10:26 crc kubenswrapper[5110]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 26 00:10:26 crc kubenswrapper[5110]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 26 00:10:26 crc kubenswrapper[5110]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 26 00:10:26 crc kubenswrapper[5110]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 26 00:10:26 crc kubenswrapper[5110]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 26 00:10:26 crc kubenswrapper[5110]: --webhook-host=127.0.0.1 \ Jan 26 00:10:26 crc kubenswrapper[5110]: --webhook-port=9743 \ Jan 26 00:10:26 crc kubenswrapper[5110]: ${ho_enable} \ Jan 26 00:10:26 crc kubenswrapper[5110]: --enable-interconnect \ 
Jan 26 00:10:26 crc kubenswrapper[5110]: --disable-approver \ Jan 26 00:10:26 crc kubenswrapper[5110]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 26 00:10:26 crc kubenswrapper[5110]: --wait-for-kubernetes-api=200s \ Jan 26 00:10:26 crc kubenswrapper[5110]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 26 00:10:26 crc kubenswrapper[5110]: --loglevel="${LOGLEVEL}" Jan 26 00:10:26 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMoun
t:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:26 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.677952 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:26 crc kubenswrapper[5110]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 26 00:10:26 crc kubenswrapper[5110]: set -o allexport Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: source /etc/kubernetes/apiserver-url.env Jan 26 00:10:26 crc kubenswrapper[5110]: else Jan 26 00:10:26 crc kubenswrapper[5110]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 26 00:10:26 crc kubenswrapper[5110]: exit 1 Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 26 00:10:26 crc kubenswrapper[5110]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:26 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.680428 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.680633 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:26 crc kubenswrapper[5110]: container 
&Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ -f "/env/_master" ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: set -o allexport Jan 26 00:10:26 crc kubenswrapper[5110]: source "/env/_master" Jan 26 00:10:26 crc kubenswrapper[5110]: set +o allexport Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 26 00:10:26 crc kubenswrapper[5110]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 26 00:10:26 crc kubenswrapper[5110]: --disable-webhook \ Jan 26 00:10:26 crc kubenswrapper[5110]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 26 00:10:26 crc kubenswrapper[5110]: --loglevel="${LOGLEVEL}" Jan 26 00:10:26 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:26 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.680813 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f15bed73-d669-439f-9828-7b952d9bfe65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngjmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngjmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6tpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.682345 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.692402 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.700779 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p427g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers 
with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-48xmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p427g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.715889 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.717703 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cba3eb-9a27-49a0-a3e6-645a8853c027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnkth\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.726449 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f11dce8d-124f-497f-96a2-11dd1dddd26d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-qgzzt\" (UID: \"f11dce8d-124f-497f-96a2-11dd1dddd26d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.726528 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-slash\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.726638 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-slash\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.726725 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c2cba3eb-9a27-49a0-a3e6-645a8853c027-ovnkube-config\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.726827 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/c2cba3eb-9a27-49a0-a3e6-645a8853c027-ovn-node-metrics-cert\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.726873 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f15bed73-d669-439f-9828-7b952d9bfe65-mcd-auth-proxy-config\") pod \"machine-config-daemon-c6tpr\" (UID: \"f15bed73-d669-439f-9828-7b952d9bfe65\") " pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.726929 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f15bed73-d669-439f-9828-7b952d9bfe65-rootfs\") pod \"machine-config-daemon-c6tpr\" (UID: \"f15bed73-d669-439f-9828-7b952d9bfe65\") " pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.726973 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f11dce8d-124f-497f-96a2-11dd1dddd26d-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-qgzzt\" (UID: \"f11dce8d-124f-497f-96a2-11dd1dddd26d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727004 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4mlqb\" (UniqueName: \"kubernetes.io/projected/f11dce8d-124f-497f-96a2-11dd1dddd26d-kube-api-access-4mlqb\") pod \"ovnkube-control-plane-57b78d8988-qgzzt\" (UID: \"f11dce8d-124f-497f-96a2-11dd1dddd26d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727034 5110 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-systemd-units\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727043 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f15bed73-d669-439f-9828-7b952d9bfe65-rootfs\") pod \"machine-config-daemon-c6tpr\" (UID: \"f15bed73-d669-439f-9828-7b952d9bfe65\") " pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727060 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-run-netns\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727100 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-node-log\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727123 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-run-netns\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727147 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-cni-bin\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727123 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-cni-bin\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727178 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-node-log\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727217 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-systemd-units\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727219 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-etc-kubernetes\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727252 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c2cba3eb-9a27-49a0-a3e6-645a8853c027-env-overrides\") pod \"ovnkube-node-bnkth\" (UID: 
\"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727277 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cf1d78e4-c2f4-47e1-ae36-612264d9d70c-tmp-dir\") pod \"node-resolver-6wqjg\" (UID: \"cf1d78e4-c2f4-47e1-ae36-612264d9d70c\") " pod="openshift-dns/node-resolver-6wqjg" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727307 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-log-socket\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727333 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-run-ovn-kubernetes\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727359 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ngjmd\" (UniqueName: \"kubernetes.io/projected/f15bed73-d669-439f-9828-7b952d9bfe65-kube-api-access-ngjmd\") pod \"machine-config-daemon-c6tpr\" (UID: \"f15bed73-d669-439f-9828-7b952d9bfe65\") " pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727393 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-system-cni-dir\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " 
pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727420 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-host-var-lib-cni-multus\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727462 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b-host\") pod \"node-ca-p427g\" (UID: \"b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b\") " pod="openshift-image-registry/node-ca-p427g" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727491 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mdlbt\" (UniqueName: \"kubernetes.io/projected/cf1d78e4-c2f4-47e1-ae36-612264d9d70c-kube-api-access-mdlbt\") pod \"node-resolver-6wqjg\" (UID: \"cf1d78e4-c2f4-47e1-ae36-612264d9d70c\") " pod="openshift-dns/node-resolver-6wqjg" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727517 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-system-cni-dir\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727526 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h6rnc\" (UniqueName: \"kubernetes.io/projected/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-kube-api-access-h6rnc\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728199 5110 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-multus-cni-dir\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728243 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2zsxq\" (UniqueName: \"kubernetes.io/projected/040d3d5f-c02a-4a70-92af-70700fd9e3c3-kube-api-access-2zsxq\") pod \"network-metrics-daemon-8ndzr\" (UID: \"040d3d5f-c02a-4a70-92af-70700fd9e3c3\") " pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728273 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-run-openvswitch\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728317 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-os-release\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728364 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs\") pod \"network-metrics-daemon-8ndzr\" (UID: \"040d3d5f-c02a-4a70-92af-70700fd9e3c3\") " pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728393 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-run-ovn\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728415 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-cni-netd\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728453 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-cni-binary-copy\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728479 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-multus-socket-dir-parent\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728522 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-multus-daemon-config\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728547 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-kubelet\") pod \"ovnkube-node-bnkth\" (UID: 
\"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728581 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f11dce8d-124f-497f-96a2-11dd1dddd26d-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-qgzzt\" (UID: \"f11dce8d-124f-497f-96a2-11dd1dddd26d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728611 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-48xmx\" (UniqueName: \"kubernetes.io/projected/b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b-kube-api-access-48xmx\") pod \"node-ca-p427g\" (UID: \"b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b\") " pod="openshift-image-registry/node-ca-p427g" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728636 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728662 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp4q\" (UniqueName: \"kubernetes.io/projected/c2cba3eb-9a27-49a0-a3e6-645a8853c027-kube-api-access-6kp4q\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728688 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f15bed73-d669-439f-9828-7b952d9bfe65-proxy-tls\") pod 
\"machine-config-daemon-c6tpr\" (UID: \"f15bed73-d669-439f-9828-7b952d9bfe65\") " pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728704 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-run-ovn\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728731 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-cnibin\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727704 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-host-var-lib-cni-multus\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728097 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cf1d78e4-c2f4-47e1-ae36-612264d9d70c-tmp-dir\") pod \"node-resolver-6wqjg\" (UID: \"cf1d78e4-c2f4-47e1-ae36-612264d9d70c\") " pod="openshift-dns/node-resolver-6wqjg" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727516 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-etc-kubernetes\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 
00:10:26.728834 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-host-var-lib-cni-bin\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728868 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-hostroot\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727716 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b-host\") pod \"node-ca-p427g\" (UID: \"b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b\") " pod="openshift-image-registry/node-ca-p427g" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728956 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b-serviceca\") pod \"node-ca-p427g\" (UID: \"b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b\") " pod="openshift-image-registry/node-ca-p427g" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728993 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-os-release\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728991 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-host-run-k8s-cni-cncf-io\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.729045 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-host-run-netns\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727859 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-jh4hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6rnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jh4hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.729127 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-run-systemd\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727900 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c2cba3eb-9a27-49a0-a3e6-645a8853c027-env-overrides\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.729138 5110 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.729205 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-kubelet\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc 
kubenswrapper[5110]: I0126 00:10:26.729072 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-run-systemd\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.729386 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-etc-openvswitch\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.729426 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cf1d78e4-c2f4-47e1-ae36-612264d9d70c-hosts-file\") pod \"node-resolver-6wqjg\" (UID: \"cf1d78e4-c2f4-47e1-ae36-612264d9d70c\") " pod="openshift-dns/node-resolver-6wqjg" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.729458 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-host-var-lib-kubelet\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.729529 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-multus-conf-dir\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.729627 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-host-run-multus-certs\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727590 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-log-socket\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.727675 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-run-ovn-kubernetes\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.729708 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-var-lib-openvswitch\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.729738 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-run-openvswitch\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.729954 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/c2cba3eb-9a27-49a0-a3e6-645a8853c027-ovnkube-script-lib\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730021 5110 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730049 5110 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730064 5110 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730078 5110 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730092 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730107 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730119 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730133 5110 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730148 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730163 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730177 5110 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730182 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-cni-netd\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730191 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730204 5110 reconciler_common.go:299] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730216 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730242 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c2cba3eb-9a27-49a0-a3e6-645a8853c027-ovnkube-config\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730280 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-cni-binary-copy\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730260 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730318 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730336 5110 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 
00:10:26.730350 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730363 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730376 5110 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730387 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730423 5110 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730438 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730450 5110 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730462 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: 
\"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730475 5110 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730144 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-multus-daemon-config\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730287 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-host-var-lib-kubelet\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.730522 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs podName:040d3d5f-c02a-4a70-92af-70700fd9e3c3 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:27.230488851 +0000 UTC m=+84.459387470 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs") pod "network-metrics-daemon-8ndzr" (UID: "040d3d5f-c02a-4a70-92af-70700fd9e3c3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.728862 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-multus-cni-dir\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730771 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f11dce8d-124f-497f-96a2-11dd1dddd26d-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-qgzzt\" (UID: \"f11dce8d-124f-497f-96a2-11dd1dddd26d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730249 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-etc-openvswitch\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.730951 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cf1d78e4-c2f4-47e1-ae36-612264d9d70c-hosts-file\") pod \"node-resolver-6wqjg\" (UID: \"cf1d78e4-c2f4-47e1-ae36-612264d9d70c\") " pod="openshift-dns/node-resolver-6wqjg" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.731005 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-multus-conf-dir\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.731102 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f15bed73-d669-439f-9828-7b952d9bfe65-mcd-auth-proxy-config\") pod \"machine-config-daemon-c6tpr\" (UID: \"f15bed73-d669-439f-9828-7b952d9bfe65\") " pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.731163 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-var-lib-openvswitch\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.731264 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-multus-socket-dir-parent\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.731322 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-host-run-netns\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.731390 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.731141 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-host-run-multus-certs\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.731424 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-cnibin\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.731495 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-host-var-lib-cni-bin\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.731549 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-host-run-k8s-cni-cncf-io\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.731586 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-hostroot\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " 
pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.731669 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c2cba3eb-9a27-49a0-a3e6-645a8853c027-ovnkube-script-lib\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: W0126 00:10:26.731675 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-1864dd0fa4921a0f61ea01af6d7f74595fc9a73ac3f81925fb0aa44f6e64b2b7 WatchSource:0}: Error finding container 1864dd0fa4921a0f61ea01af6d7f74595fc9a73ac3f81925fb0aa44f6e64b2b7: Status 404 returned error can't find the container with id 1864dd0fa4921a0f61ea01af6d7f74595fc9a73ac3f81925fb0aa44f6e64b2b7 Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.732613 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f11dce8d-124f-497f-96a2-11dd1dddd26d-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-qgzzt\" (UID: \"f11dce8d-124f-497f-96a2-11dd1dddd26d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.733584 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b-serviceca\") pod \"node-ca-p427g\" (UID: \"b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b\") " pod="openshift-image-registry/node-ca-p427g" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.734018 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f15bed73-d669-439f-9828-7b952d9bfe65-proxy-tls\") pod 
\"machine-config-daemon-c6tpr\" (UID: \"f15bed73-d669-439f-9828-7b952d9bfe65\") " pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.734431 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f11dce8d-124f-497f-96a2-11dd1dddd26d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-qgzzt\" (UID: \"f11dce8d-124f-497f-96a2-11dd1dddd26d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.736313 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.737937 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c2cba3eb-9a27-49a0-a3e6-645a8853c027-ovn-node-metrics-cert\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.738069 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been 
read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.742945 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngjmd\" (UniqueName: \"kubernetes.io/projected/f15bed73-d669-439f-9828-7b952d9bfe65-kube-api-access-ngjmd\") pod \"machine-config-daemon-c6tpr\" (UID: \"f15bed73-d669-439f-9828-7b952d9bfe65\") " pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.744381 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6rnc\" (UniqueName: \"kubernetes.io/projected/f2948d2b-fac7-4f3f-8b5f-f6f9c914daec-kube-api-access-h6rnc\") pod \"multus-jh4hk\" (UID: \"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\") " pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.744461 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdlbt\" (UniqueName: \"kubernetes.io/projected/cf1d78e4-c2f4-47e1-ae36-612264d9d70c-kube-api-access-mdlbt\") pod \"node-resolver-6wqjg\" (UID: \"cf1d78e4-c2f4-47e1-ae36-612264d9d70c\") " pod="openshift-dns/node-resolver-6wqjg" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.745671 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf35b8e-a6c5-479a-860f-db0308fb993b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a67e63f2b2d217c10ec50b836c9cca85c134aef230c27dbb1ae8139a6dddd72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://edc58dbb97e7a54abf71095739d515edea8205bf2c5e4008b22a69d40b83cb2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://44222dc6a7066064989d8823f6c0f556f4d4d8cac2f2b4ab1462cbb51db8ff07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3202209affc7d7c4869fa3cbbd7b3a166dee850e7789b407826358743c13e508\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3202209affc7d7c4869fa3cbbd7b3a166dee850e7789b407826358743c13e508\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:09Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0126 00:10:09.419281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:09.419514 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:09.420601 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1392405372/tls.crt::/tmp/serving-cert-1392405372/tls.key\\\\\\\"\\\\nI0126 00:10:09.740266 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:09.742210 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:09.742231 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:09.742265 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:09.742270 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:09.746497 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:10:09.746538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:09.746544 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:09.746549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:09.746551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:09.746556 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:09.746559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 00:10:09.746563 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 00:10:09.749601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fc0c2347169345d3886bfc369c472fa0e2cf381c3b596a46b7de48d84e955504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2fe6116a5b9da02a5b57aaf57f7bed21247934453d936503dd835d1eac1c23b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe6116a5b9da02a5b57aaf57f7bed21247934453d936503dd835d1eac1c23b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.748206 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-77v2r" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.748220 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mlqb\" (UniqueName: \"kubernetes.io/projected/f11dce8d-124f-497f-96a2-11dd1dddd26d-kube-api-access-4mlqb\") pod \"ovnkube-control-plane-57b78d8988-qgzzt\" (UID: \"f11dce8d-124f-497f-96a2-11dd1dddd26d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.750412 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.750449 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.750460 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.750475 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.750487 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:26Z","lastTransitionTime":"2026-01-26T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.751015 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zsxq\" (UniqueName: \"kubernetes.io/projected/040d3d5f-c02a-4a70-92af-70700fd9e3c3-kube-api-access-2zsxq\") pod \"network-metrics-daemon-8ndzr\" (UID: \"040d3d5f-c02a-4a70-92af-70700fd9e3c3\") " pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.751274 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-48xmx\" (UniqueName: \"kubernetes.io/projected/b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b-kube-api-access-48xmx\") pod \"node-ca-p427g\" (UID: \"b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b\") " pod="openshift-image-registry/node-ca-p427g" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.753039 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kp4q\" (UniqueName: \"kubernetes.io/projected/c2cba3eb-9a27-49a0-a3e6-645a8853c027-kube-api-access-6kp4q\") pod \"ovnkube-node-bnkth\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.756524 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-p427g" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.758376 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: W0126 00:10:26.758735 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb003609_47aa_4a6a_a7ec_6dbc03ded29a.slice/crio-cb2e581aac84844bc270a8b77f7bc1bac2ba3853baa81e27aadc94311e5b1e36 WatchSource:0}: Error finding container cb2e581aac84844bc270a8b77f7bc1bac2ba3853baa81e27aadc94311e5b1e36: Status 404 returned error can't find the container with id cb2e581aac84844bc270a8b77f7bc1bac2ba3853baa81e27aadc94311e5b1e36 Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.762938 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8qwlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-77v2r_openshift-multus(db003609-47aa-4a6a-a7ec-6dbc03ded29a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.764123 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-77v2r" podUID="db003609-47aa-4a6a-a7ec-6dbc03ded29a" Jan 26 00:10:26 crc kubenswrapper[5110]: W0126 00:10:26.769935 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1bb7325_6bd2_4a72_aa8b_79cc2e3d821b.slice/crio-a24436833c02423b1dda4b693f275b810c510d5a4093be0cb61ac7e610000441 WatchSource:0}: Error finding container a24436833c02423b1dda4b693f275b810c510d5a4093be0cb61ac7e610000441: Status 404 returned error can't find the container with id a24436833c02423b1dda4b693f275b810c510d5a4093be0cb61ac7e610000441 Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.770658 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.772475 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:26 crc kubenswrapper[5110]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 26 00:10:26 crc kubenswrapper[5110]: while [ true ]; Jan 26 00:10:26 crc 
kubenswrapper[5110]: do Jan 26 00:10:26 crc kubenswrapper[5110]: for f in $(ls /tmp/serviceca); do Jan 26 00:10:26 crc kubenswrapper[5110]: echo $f Jan 26 00:10:26 crc kubenswrapper[5110]: ca_file_path="/tmp/serviceca/${f}" Jan 26 00:10:26 crc kubenswrapper[5110]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 26 00:10:26 crc kubenswrapper[5110]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 26 00:10:26 crc kubenswrapper[5110]: if [ -e "${reg_dir_path}" ]; then Jan 26 00:10:26 crc kubenswrapper[5110]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 26 00:10:26 crc kubenswrapper[5110]: else Jan 26 00:10:26 crc kubenswrapper[5110]: mkdir $reg_dir_path Jan 26 00:10:26 crc kubenswrapper[5110]: cp $ca_file_path $reg_dir_path/ca.crt Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: done Jan 26 00:10:26 crc kubenswrapper[5110]: for d in $(ls /etc/docker/certs.d); do Jan 26 00:10:26 crc kubenswrapper[5110]: echo $d Jan 26 00:10:26 crc kubenswrapper[5110]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 26 00:10:26 crc kubenswrapper[5110]: reg_conf_path="/tmp/serviceca/${dp}" Jan 26 00:10:26 crc kubenswrapper[5110]: if [ ! 
-e "${reg_conf_path}" ]; then Jan 26 00:10:26 crc kubenswrapper[5110]: rm -rf /etc/docker/certs.d/$d Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: done Jan 26 00:10:26 crc kubenswrapper[5110]: sleep 60 & wait ${!} Jan 26 00:10:26 crc kubenswrapper[5110]: done Jan 26 00:10:26 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48xmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-p427g_openshift-image-registry(b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:26 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.774601 5110 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-p427g" podUID="b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.782948 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-6wqjg" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.783254 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.791673 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p427g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-48xmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p427g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.792724 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" Jan 26 00:10:26 crc kubenswrapper[5110]: W0126 00:10:26.793877 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf1d78e4_c2f4_47e1_ae36_612264d9d70c.slice/crio-118e4b4adabf214986e8506b686f373d8b678c39e1065de30ca1e9e3371588af WatchSource:0}: Error finding container 118e4b4adabf214986e8506b686f373d8b678c39e1065de30ca1e9e3371588af: Status 404 returned error can't find the container with id 118e4b4adabf214986e8506b686f373d8b678c39e1065de30ca1e9e3371588af Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.799540 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.802563 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:26 crc kubenswrapper[5110]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 26 00:10:26 crc kubenswrapper[5110]: set -uo pipefail Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 26 00:10:26 crc kubenswrapper[5110]: HOSTS_FILE="/etc/hosts" Jan 26 00:10:26 crc kubenswrapper[5110]: TEMP_FILE="/tmp/hosts.tmp" Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: # Make a temporary file with the old hosts file's attributes. Jan 26 00:10:26 crc kubenswrapper[5110]: if ! 
cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 26 00:10:26 crc kubenswrapper[5110]: echo "Failed to preserve hosts file. Exiting." Jan 26 00:10:26 crc kubenswrapper[5110]: exit 1 Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: while true; do Jan 26 00:10:26 crc kubenswrapper[5110]: declare -A svc_ips Jan 26 00:10:26 crc kubenswrapper[5110]: for svc in "${services[@]}"; do Jan 26 00:10:26 crc kubenswrapper[5110]: # Fetch service IP from cluster dns if present. We make several tries Jan 26 00:10:26 crc kubenswrapper[5110]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 26 00:10:26 crc kubenswrapper[5110]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 26 00:10:26 crc kubenswrapper[5110]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 26 00:10:26 crc kubenswrapper[5110]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 26 00:10:26 crc kubenswrapper[5110]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 26 00:10:26 crc kubenswrapper[5110]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 26 00:10:26 crc kubenswrapper[5110]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 26 00:10:26 crc kubenswrapper[5110]: for i in ${!cmds[*]} Jan 26 00:10:26 crc kubenswrapper[5110]: do Jan 26 00:10:26 crc kubenswrapper[5110]: ips=($(eval "${cmds[i]}")) Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: svc_ips["${svc}"]="${ips[@]}" Jan 26 00:10:26 crc kubenswrapper[5110]: break Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: done Jan 26 00:10:26 crc kubenswrapper[5110]: done Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: # Update /etc/hosts only if we get valid service IPs Jan 26 00:10:26 crc kubenswrapper[5110]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 26 00:10:26 crc kubenswrapper[5110]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 26 00:10:26 crc kubenswrapper[5110]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 26 00:10:26 crc kubenswrapper[5110]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 26 00:10:26 crc kubenswrapper[5110]: sleep 60 & wait Jan 26 00:10:26 crc kubenswrapper[5110]: continue Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: # Append resolver entries for services Jan 26 00:10:26 crc kubenswrapper[5110]: rc=0 Jan 26 00:10:26 crc kubenswrapper[5110]: for svc in "${!svc_ips[@]}"; do Jan 26 00:10:26 crc kubenswrapper[5110]: for ip in ${svc_ips[${svc}]}; do Jan 26 00:10:26 crc kubenswrapper[5110]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Jan 26 00:10:26 crc kubenswrapper[5110]: done Jan 26 00:10:26 crc kubenswrapper[5110]: done Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ $rc -ne 0 ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: sleep 60 & wait Jan 26 00:10:26 crc kubenswrapper[5110]: continue Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 26 00:10:26 crc kubenswrapper[5110]: # Replace /etc/hosts with our modified version if needed Jan 26 00:10:26 crc kubenswrapper[5110]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 26 00:10:26 crc kubenswrapper[5110]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: sleep 60 & wait Jan 26 00:10:26 crc kubenswrapper[5110]: unset svc_ips Jan 26 00:10:26 crc kubenswrapper[5110]: done Jan 26 00:10:26 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdlbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-6wqjg_openshift-dns(cf1d78e4-c2f4-47e1-ae36-612264d9d70c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:26 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.803841 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-6wqjg" podUID="cf1d78e4-c2f4-47e1-ae36-612264d9d70c" Jan 26 00:10:26 crc kubenswrapper[5110]: W0126 00:10:26.808065 5110 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf11dce8d_124f_497f_96a2_11dd1dddd26d.slice/crio-b828bb804ab6cf9a77bb51fb75c6134d1b6590e4afa009431ee2df876fd9e62c WatchSource:0}: Error finding container b828bb804ab6cf9a77bb51fb75c6134d1b6590e4afa009431ee2df876fd9e62c: Status 404 returned error can't find the container with id b828bb804ab6cf9a77bb51fb75c6134d1b6590e4afa009431ee2df876fd9e62c Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.808199 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cba3eb-9a27-49a0-a3e6-645a8853c027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnkth\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.811176 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:26 crc kubenswrapper[5110]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 26 00:10:26 crc kubenswrapper[5110]: set -euo pipefail Jan 26 00:10:26 crc kubenswrapper[5110]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 26 00:10:26 crc kubenswrapper[5110]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 26 00:10:26 crc kubenswrapper[5110]: # As the secret mount is optional we must wait for the files to be present. Jan 26 00:10:26 crc kubenswrapper[5110]: # The service is created in monitor.yaml and this is created in sdn.yaml. Jan 26 00:10:26 crc kubenswrapper[5110]: TS=$(date +%s) Jan 26 00:10:26 crc kubenswrapper[5110]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 26 00:10:26 crc kubenswrapper[5110]: HAS_LOGGED_INFO=0 Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: log_missing_certs(){ Jan 26 00:10:26 crc kubenswrapper[5110]: CUR_TS=$(date +%s) Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 26 00:10:26 crc kubenswrapper[5110]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 26 00:10:26 crc kubenswrapper[5110]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 26 00:10:26 crc kubenswrapper[5110]: HAS_LOGGED_INFO=1 Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: } Jan 26 00:10:26 crc kubenswrapper[5110]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Jan 26 00:10:26 crc kubenswrapper[5110]: log_missing_certs Jan 26 00:10:26 crc kubenswrapper[5110]: sleep 5 Jan 26 00:10:26 crc kubenswrapper[5110]: done Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 26 00:10:26 crc kubenswrapper[5110]: exec /usr/bin/kube-rbac-proxy \ Jan 26 00:10:26 crc kubenswrapper[5110]: --logtostderr \ Jan 26 00:10:26 crc kubenswrapper[5110]: --secure-listen-address=:9108 \ Jan 26 00:10:26 crc kubenswrapper[5110]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 26 00:10:26 crc kubenswrapper[5110]: --upstream=http://127.0.0.1:29108/ \ Jan 26 00:10:26 crc kubenswrapper[5110]: --tls-private-key-file=${TLS_PK} \ Jan 26 00:10:26 crc kubenswrapper[5110]: --tls-cert-file=${TLS_CERT} Jan 26 00:10:26 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mlqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-qgzzt_openshift-ovn-kubernetes(f11dce8d-124f-497f-96a2-11dd1dddd26d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:26 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.811992 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.814348 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:26 crc kubenswrapper[5110]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ -f "/env/_master" ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: set -o allexport Jan 26 00:10:26 crc kubenswrapper[5110]: source "/env/_master" Jan 26 00:10:26 crc kubenswrapper[5110]: set +o allexport Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: ovn_v4_join_subnet_opt= Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ "" != "" ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: ovn_v6_join_subnet_opt= Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ "" != "" ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: ovn_v4_transit_switch_subnet_opt= Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ "" != "" ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: ovn_v6_transit_switch_subnet_opt= Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ "" != "" ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc 
kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: dns_name_resolver_enabled_flag= Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ "false" == "true" ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: # This is needed so that converting clusters from GA to TP Jan 26 00:10:26 crc kubenswrapper[5110]: # will rollout control plane pods as well Jan 26 00:10:26 crc kubenswrapper[5110]: network_segmentation_enabled_flag= Jan 26 00:10:26 crc kubenswrapper[5110]: multi_network_enabled_flag= Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ "true" == "true" ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: multi_network_enabled_flag="--enable-multi-network" Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ "true" == "true" ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ "true" != "true" ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: multi_network_enabled_flag="--enable-multi-network" Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: route_advertisements_enable_flag= Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ "false" == "true" ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: preconfigured_udn_addresses_enable_flag= Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ "false" == "true" ]]; then 
Jan 26 00:10:26 crc kubenswrapper[5110]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: # Enable multi-network policy if configured (control-plane always full mode) Jan 26 00:10:26 crc kubenswrapper[5110]: multi_network_policy_enabled_flag= Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ "false" == "true" ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: # Enable admin network policy if configured (control-plane always full mode) Jan 26 00:10:26 crc kubenswrapper[5110]: admin_network_policy_enabled_flag= Jan 26 00:10:26 crc kubenswrapper[5110]: if [[ "true" == "true" ]]; then Jan 26 00:10:26 crc kubenswrapper[5110]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: if [ "shared" == "shared" ]; then Jan 26 00:10:26 crc kubenswrapper[5110]: gateway_mode_flags="--gateway-mode shared" Jan 26 00:10:26 crc kubenswrapper[5110]: elif [ "shared" == "local" ]; then Jan 26 00:10:26 crc kubenswrapper[5110]: gateway_mode_flags="--gateway-mode local" Jan 26 00:10:26 crc kubenswrapper[5110]: else Jan 26 00:10:26 crc kubenswrapper[5110]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Jan 26 00:10:26 crc kubenswrapper[5110]: exit 1 Jan 26 00:10:26 crc kubenswrapper[5110]: fi Jan 26 00:10:26 crc kubenswrapper[5110]: Jan 26 00:10:26 crc kubenswrapper[5110]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 26 00:10:26 crc kubenswrapper[5110]: exec /usr/bin/ovnkube \ Jan 26 00:10:26 crc kubenswrapper[5110]: --enable-interconnect \ Jan 26 00:10:26 crc kubenswrapper[5110]: --init-cluster-manager "${K8S_NODE}" \ Jan 26 00:10:26 crc kubenswrapper[5110]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 26 00:10:26 crc kubenswrapper[5110]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 26 00:10:26 crc kubenswrapper[5110]: --metrics-bind-address "127.0.0.1:29108" \ Jan 26 00:10:26 crc kubenswrapper[5110]: --metrics-enable-pprof \ Jan 26 00:10:26 crc kubenswrapper[5110]: --metrics-enable-config-duration \ Jan 26 00:10:26 crc kubenswrapper[5110]: ${ovn_v4_join_subnet_opt} \ Jan 26 00:10:26 crc kubenswrapper[5110]: ${ovn_v6_join_subnet_opt} \ Jan 26 00:10:26 crc kubenswrapper[5110]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 26 00:10:26 crc kubenswrapper[5110]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 26 00:10:26 crc kubenswrapper[5110]: ${dns_name_resolver_enabled_flag} \ Jan 26 00:10:26 crc kubenswrapper[5110]: ${persistent_ips_enabled_flag} \ Jan 26 00:10:26 crc kubenswrapper[5110]: ${multi_network_enabled_flag} \ Jan 26 00:10:26 crc kubenswrapper[5110]: ${network_segmentation_enabled_flag} \ Jan 26 00:10:26 crc kubenswrapper[5110]: ${gateway_mode_flags} \ Jan 26 00:10:26 crc kubenswrapper[5110]: ${route_advertisements_enable_flag} \ Jan 26 00:10:26 crc kubenswrapper[5110]: ${preconfigured_udn_addresses_enable_flag} \ Jan 26 00:10:26 crc kubenswrapper[5110]: --enable-egress-ip=true \ Jan 26 00:10:26 crc kubenswrapper[5110]: --enable-egress-firewall=true \ Jan 26 00:10:26 crc kubenswrapper[5110]: --enable-egress-qos=true \ Jan 26 00:10:26 crc kubenswrapper[5110]: --enable-egress-service=true \ 
Jan 26 00:10:26 crc kubenswrapper[5110]: --enable-multicast \ Jan 26 00:10:26 crc kubenswrapper[5110]: --enable-multi-external-gateway=true \ Jan 26 00:10:26 crc kubenswrapper[5110]: ${multi_network_policy_enabled_flag} \ Jan 26 00:10:26 crc kubenswrapper[5110]: ${admin_network_policy_enabled_flag} Jan 26 00:10:26 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mlqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovnkube-control-plane-57b78d8988-qgzzt_openshift-ovn-kubernetes(f11dce8d-124f-497f-96a2-11dd1dddd26d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:26 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.815539 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" podUID="f11dce8d-124f-497f-96a2-11dd1dddd26d" Jan 26 00:10:26 crc kubenswrapper[5110]: W0126 00:10:26.815541 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2cba3eb_9a27_49a0_a3e6_645a8853c027.slice/crio-0846a745825adace2d108546cbd36763771fe83521c75441230842847d8e72c2 WatchSource:0}: Error finding container 0846a745825adace2d108546cbd36763771fe83521c75441230842847d8e72c2: Status 404 returned error can't find the container with id 0846a745825adace2d108546cbd36763771fe83521c75441230842847d8e72c2 Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.818227 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:26 crc kubenswrapper[5110]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 26 00:10:26 crc kubenswrapper[5110]: apiVersion: v1 Jan 26 00:10:26 crc kubenswrapper[5110]: clusters: Jan 26 00:10:26 crc kubenswrapper[5110]: - cluster: Jan 26 00:10:26 crc kubenswrapper[5110]: certificate-authority: 
/var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 26 00:10:26 crc kubenswrapper[5110]: server: https://api-int.crc.testing:6443 Jan 26 00:10:26 crc kubenswrapper[5110]: name: default-cluster Jan 26 00:10:26 crc kubenswrapper[5110]: contexts: Jan 26 00:10:26 crc kubenswrapper[5110]: - context: Jan 26 00:10:26 crc kubenswrapper[5110]: cluster: default-cluster Jan 26 00:10:26 crc kubenswrapper[5110]: namespace: default Jan 26 00:10:26 crc kubenswrapper[5110]: user: default-auth Jan 26 00:10:26 crc kubenswrapper[5110]: name: default-context Jan 26 00:10:26 crc kubenswrapper[5110]: current-context: default-context Jan 26 00:10:26 crc kubenswrapper[5110]: kind: Config Jan 26 00:10:26 crc kubenswrapper[5110]: preferences: {} Jan 26 00:10:26 crc kubenswrapper[5110]: users: Jan 26 00:10:26 crc kubenswrapper[5110]: - name: default-auth Jan 26 00:10:26 crc kubenswrapper[5110]: user: Jan 26 00:10:26 crc kubenswrapper[5110]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 26 00:10:26 crc kubenswrapper[5110]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 26 00:10:26 crc kubenswrapper[5110]: EOF Jan 26 00:10:26 crc kubenswrapper[5110]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kp4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-bnkth_openshift-ovn-kubernetes(c2cba3eb-9a27-49a0-a3e6-645a8853c027): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:26 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.818338 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-jh4hk" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.819639 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.820462 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-jh4hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6rnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jh4hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: W0126 00:10:26.823586 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf15bed73_d669_439f_9828_7b952d9bfe65.slice/crio-8d4d72c2ca65a9357548a9f415c8b14ba57a625b2969c8a7e973bf9665af880d WatchSource:0}: Error finding container 8d4d72c2ca65a9357548a9f415c8b14ba57a625b2969c8a7e973bf9665af880d: Status 404 returned error can't find the container with id 8d4d72c2ca65a9357548a9f415c8b14ba57a625b2969c8a7e973bf9665af880d Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.827641 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 
0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ngjmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-c6tpr_openshift-machine-config-operator(f15bed73-d669-439f-9828-7b952d9bfe65): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.829877 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ngjmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-c6tpr_openshift-machine-config-operator(f15bed73-d669-439f-9828-7b952d9bfe65): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:26 crc kubenswrapper[5110]: W0126 00:10:26.830549 5110 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2948d2b_fac7_4f3f_8b5f_f6f9c914daec.slice/crio-49c88247be6ea34dc42d1e4c7fc65989be56f57adae77c233b18b736a73fd161 WatchSource:0}: Error finding container 49c88247be6ea34dc42d1e4c7fc65989be56f57adae77c233b18b736a73fd161: Status 404 returned error can't find the container with id 49c88247be6ea34dc42d1e4c7fc65989be56f57adae77c233b18b736a73fd161 Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.830987 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.832024 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf35b8e-a6c5-479a-860f-db0308fb993b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a67e63f2b2d217c10ec50b836c9cca85c134aef230c27dbb1ae8139a6dddd72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://edc58dbb97e7a54abf71095739d515edea8205bf2c5e4008b22a69d40b83cb2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://44222dc6a7066064989d8823f6c0f556f4d4d8cac2f2b4ab1462cbb51db8ff07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3202209affc7d7c4869fa3cbbd7b3a166dee850e7789b407826358743c13e508\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3202209affc7d7c4869fa3cbbd7b3a166dee850e7789b407826358743c13e508\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:09Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0126 00:10:09.419281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:09.419514 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:09.420601 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1392405372/tls.crt::/tmp/serving-cert-1392405372/tls.key\\\\\\\"\\\\nI0126 00:10:09.740266 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:09.742210 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:09.742231 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:09.742265 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:09.742270 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI0126 00:10:09.746497 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:10:09.746538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:09.746544 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:09.746549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:09.746551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:09.746556 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:09.746559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 00:10:09.746563 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 00:10:09.749601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fc0c2347169345d3886bfc369c472fa0e2cf381c3b596a46b7de48d84e955504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2fe6116a5b9da02a5b57aaf57f7bed21247934453d936503dd835d1eac1c23b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe6116a5b9da02a5b57aaf57f7bed21247934453d936503dd835d1eac1c23b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.833298 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:26 crc kubenswrapper[5110]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 26 00:10:26 crc kubenswrapper[5110]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 26 00:10:26 crc kubenswrapper[5110]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6rnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-jh4hk_openshift-multus(f2948d2b-fac7-4f3f-8b5f-f6f9c914daec): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:26 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.834468 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-jh4hk" podUID="f2948d2b-fac7-4f3f-8b5f-f6f9c914daec" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.852565 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.852607 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.852618 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.852637 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.852651 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:26Z","lastTransitionTime":"2026-01-26T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.863976 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.905721 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.931977 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.932096 5110 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.932189 5110 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.932404 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.932458 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.932480 5110 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.932485 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:27.932445766 +0000 UTC m=+85.161344465 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.932579 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:27.932545479 +0000 UTC m=+85.161444098 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.932413 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.932693 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.932711 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.932727 5110 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.932745 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.932854 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:27.932843928 +0000 UTC m=+85.161742537 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.932930 5110 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:26 crc kubenswrapper[5110]: E0126 00:10:26.932993 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:27.932975301 +0000 UTC m=+85.161873920 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.945018 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecd0515-69bc-4e35-9cac-3edd40468f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://30ae138b7d197dd54752f6c812a2ea9e7f23f015060d7f1c186abfe872d810a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"r
eady\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e8cd7582659a048bc238ce9753c82ac7555698b76159d5cf809cfd7eddfa8d7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8cd7582659a048bc238ce9753c82ac7555698b76159d5cf809cfd7eddfa8d7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.955365 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.955422 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.955437 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.955457 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.955472 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:26Z","lastTransitionTime":"2026-01-26T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:26 crc kubenswrapper[5110]: I0126 00:10:26.990160 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7e97ec-d5be-4c28-bde0-55be95e4d947\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c0ec4f07f5bc1820f2ebd1ad017e35f029e5b8166971c3676b9f348147456cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://201fbacf5107a565f88c0b909a2d52c716cda4ea473eb26d2c613fe248ecf434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://74a99588daa785243f933c28a11ac41cbdab81eb6fa291c097f633a57d2e90df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde72610
9a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c060a49b2c9f19914a87af1d9e1860fcf614171795c22fcd1fb629d84d3df5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.026272 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.058995 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.059060 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.059074 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.059095 5110 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.059107 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:27Z","lastTransitionTime":"2026-01-26T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.067210 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.107551 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77v2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db003609-47aa-4a6a-a7ec-6dbc03ded29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77v2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.135419 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.135691 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:28.135630547 +0000 UTC m=+85.364529196 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.144217 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11dce8d-124f-497f-96a2-11dd1dddd26d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mlqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mlqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-qgzzt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.161140 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.161231 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.161266 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.161301 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.161323 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:27Z","lastTransitionTime":"2026-01-26T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.185854 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8ndzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"040d3d5f-c02a-4a70-92af-70700fd9e3c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zsxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zsxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8ndzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.237595 5110 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs\") pod \"network-metrics-daemon-8ndzr\" (UID: \"040d3d5f-c02a-4a70-92af-70700fd9e3c3\") " pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.237774 5110 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.237903 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs podName:040d3d5f-c02a-4a70-92af-70700fd9e3c3 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:28.237877735 +0000 UTC m=+85.466776354 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs") pod "network-metrics-daemon-8ndzr" (UID: "040d3d5f-c02a-4a70-92af-70700fd9e3c3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.239115 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd7963f-3ea7-4581-8c42-0bd2de1e5540\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e90c7cb0f7f52fbb22c7c05dda374c6044ece3a6eb021c481437ea8f5de1298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2277f081ca24900f20ca43d5a74890c20bee6e3ec8ad9e4309f66cf96678660e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5241883a28ef3e6af36e6b5a5cca557f4bdb4444cdb5ba3cd00b1e9f28243d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0a980f89ffe1688954fbe60c16faeaf282be70e3d5a3968f74311a125158488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://62c4e498d0857df9b3c9615022c95ca7ce1497c476eb255d3275604912cf63f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9556ace6d728cde3159c1c8e97e12787903a3279cba1610d592fe244eff5ae5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9556ace6d728cde3159c1c8e97e12787903a3279cba1610d592fe244eff5ae5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8339480b8ee693e38bc6660f0b37482ff26d091553c39db9cf75f4f8b95f96fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8339480b8ee693e38bc6660f0b37482ff26d091553c39db9cf75f4f8b95f96fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://fb3dc595d8f46c045ba87b7515cb2d4576bb2237750740db45c461f69d392c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://fb3dc595d8f46c045ba87b7515cb2d4576bb2237750740db45c461f69d392c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.263420 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.263480 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.263493 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.263512 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.263524 5110 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:27Z","lastTransitionTime":"2026-01-26T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.266015 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafd472-3344-4a8b-a4ef-5242709c94d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86dfaec2435d8c668da9a0ae44fdc2a617089e161cd08dcdf2559f662c8f2b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-schedu
ler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c2eb152f744e6c7d629e5eb8590f1c2d570d8f370e9deede5c4436d336c754bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3734fa9576160d905d72573d1984bf03448ada8b30a78ff574ce89cb894632d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345
491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://46fb6b4c2b5179db0acdc17fca0fe367346d5e4b61798c9054f4af190e4a8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46fb6b4c2b5179db0acdc17fca0fe367346d5e4b61798c9054f4af190e4a8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"
gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.278027 5110 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.331008 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.336149 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.336945 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.337968 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.339827 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.342074 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.343811 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.345065 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.346234 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.346816 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.347993 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.348770 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.350217 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.350819 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.352160 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.352626 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.353562 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.354693 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.355714 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.356978 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.357747 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.358570 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.360316 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.361422 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.362281 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.363369 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.364346 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.366129 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-6wqjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d78e4-c2f4-47e1-ae36-612264d9d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdlbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6wqjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.366201 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.366261 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.366250 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:27 crc 
kubenswrapper[5110]: I0126 00:10:27.366354 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.366375 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.366389 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:27Z","lastTransitionTime":"2026-01-26T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.367098 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.369501 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.370071 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.371089 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.372332 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" 
path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.373752 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.376295 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.377499 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.378161 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.379024 5110 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.379499 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.382080 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.383406 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.384424 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.385540 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.386021 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.387198 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.387835 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.388316 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.389391 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.390251 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.391485 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.392178 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.393192 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.393806 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.394882 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.395710 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.397266 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.398020 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.406538 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f15bed73-d669-439f-9828-7b952d9bfe65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngjmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngjmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6tpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.407680 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.408597 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.468472 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.468785 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.468915 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.468986 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.469058 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:27Z","lastTransitionTime":"2026-01-26T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.572654 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.572733 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.572758 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.572792 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.572864 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:27Z","lastTransitionTime":"2026-01-26T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.675626 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.675934 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.676047 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.676203 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.676325 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:27Z","lastTransitionTime":"2026-01-26T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.679244 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6wqjg" event={"ID":"cf1d78e4-c2f4-47e1-ae36-612264d9d70c","Type":"ContainerStarted","Data":"118e4b4adabf214986e8506b686f373d8b678c39e1065de30ca1e9e3371588af"} Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.680723 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-p427g" event={"ID":"b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b","Type":"ContainerStarted","Data":"a24436833c02423b1dda4b693f275b810c510d5a4093be0cb61ac7e610000441"} Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.682319 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" event={"ID":"f15bed73-d669-439f-9828-7b952d9bfe65","Type":"ContainerStarted","Data":"8d4d72c2ca65a9357548a9f415c8b14ba57a625b2969c8a7e973bf9665af880d"} Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.683007 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:27 crc kubenswrapper[5110]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 26 00:10:27 crc kubenswrapper[5110]: set -uo pipefail Jan 26 00:10:27 crc kubenswrapper[5110]: Jan 26 00:10:27 crc kubenswrapper[5110]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 26 00:10:27 crc kubenswrapper[5110]: Jan 26 00:10:27 crc kubenswrapper[5110]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 26 00:10:27 crc kubenswrapper[5110]: HOSTS_FILE="/etc/hosts" Jan 26 00:10:27 crc kubenswrapper[5110]: TEMP_FILE="/tmp/hosts.tmp" Jan 26 00:10:27 crc kubenswrapper[5110]: Jan 26 00:10:27 crc kubenswrapper[5110]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 26 00:10:27 crc kubenswrapper[5110]: 
Jan 26 00:10:27 crc kubenswrapper[5110]: # Make a temporary file with the old hosts file's attributes. Jan 26 00:10:27 crc kubenswrapper[5110]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 26 00:10:27 crc kubenswrapper[5110]: echo "Failed to preserve hosts file. Exiting." Jan 26 00:10:27 crc kubenswrapper[5110]: exit 1 Jan 26 00:10:27 crc kubenswrapper[5110]: fi Jan 26 00:10:27 crc kubenswrapper[5110]: Jan 26 00:10:27 crc kubenswrapper[5110]: while true; do Jan 26 00:10:27 crc kubenswrapper[5110]: declare -A svc_ips Jan 26 00:10:27 crc kubenswrapper[5110]: for svc in "${services[@]}"; do Jan 26 00:10:27 crc kubenswrapper[5110]: # Fetch service IP from cluster dns if present. We make several tries Jan 26 00:10:27 crc kubenswrapper[5110]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 26 00:10:27 crc kubenswrapper[5110]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 26 00:10:27 crc kubenswrapper[5110]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 26 00:10:27 crc kubenswrapper[5110]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 26 00:10:27 crc kubenswrapper[5110]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 26 00:10:27 crc kubenswrapper[5110]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 26 00:10:27 crc kubenswrapper[5110]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 26 00:10:27 crc kubenswrapper[5110]: for i in ${!cmds[*]} Jan 26 00:10:27 crc kubenswrapper[5110]: do Jan 26 00:10:27 crc kubenswrapper[5110]: ips=($(eval "${cmds[i]}")) Jan 26 00:10:27 crc kubenswrapper[5110]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 26 00:10:27 crc kubenswrapper[5110]: svc_ips["${svc}"]="${ips[@]}" Jan 26 00:10:27 crc kubenswrapper[5110]: break Jan 26 00:10:27 crc kubenswrapper[5110]: fi Jan 26 00:10:27 crc kubenswrapper[5110]: done Jan 26 00:10:27 crc kubenswrapper[5110]: done Jan 26 00:10:27 crc kubenswrapper[5110]: Jan 26 00:10:27 crc kubenswrapper[5110]: # Update /etc/hosts only if we get valid service IPs Jan 26 00:10:27 crc kubenswrapper[5110]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 26 00:10:27 crc kubenswrapper[5110]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 26 00:10:27 crc kubenswrapper[5110]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 26 00:10:27 crc kubenswrapper[5110]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 26 00:10:27 crc kubenswrapper[5110]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 26 00:10:27 crc kubenswrapper[5110]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 26 00:10:27 crc kubenswrapper[5110]: sleep 60 & wait Jan 26 00:10:27 crc kubenswrapper[5110]: continue Jan 26 00:10:27 crc kubenswrapper[5110]: fi Jan 26 00:10:27 crc kubenswrapper[5110]: Jan 26 00:10:27 crc kubenswrapper[5110]: # Append resolver entries for services Jan 26 00:10:27 crc kubenswrapper[5110]: rc=0 Jan 26 00:10:27 crc kubenswrapper[5110]: for svc in "${!svc_ips[@]}"; do Jan 26 00:10:27 crc kubenswrapper[5110]: for ip in ${svc_ips[${svc}]}; do Jan 26 00:10:27 crc kubenswrapper[5110]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Jan 26 00:10:27 crc kubenswrapper[5110]: done Jan 26 00:10:27 crc kubenswrapper[5110]: done Jan 26 00:10:27 crc kubenswrapper[5110]: if [[ $rc -ne 0 ]]; then Jan 26 00:10:27 crc kubenswrapper[5110]: sleep 60 & wait Jan 26 00:10:27 crc kubenswrapper[5110]: continue Jan 26 00:10:27 crc kubenswrapper[5110]: fi Jan 26 00:10:27 crc kubenswrapper[5110]: Jan 26 00:10:27 crc kubenswrapper[5110]: Jan 26 00:10:27 crc kubenswrapper[5110]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 26 00:10:27 crc kubenswrapper[5110]: # Replace /etc/hosts with our modified version if needed Jan 26 00:10:27 crc kubenswrapper[5110]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 26 00:10:27 crc kubenswrapper[5110]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 26 00:10:27 crc kubenswrapper[5110]: fi Jan 26 00:10:27 crc kubenswrapper[5110]: sleep 60 & wait Jan 26 00:10:27 crc kubenswrapper[5110]: unset svc_ips Jan 26 00:10:27 crc kubenswrapper[5110]: done Jan 26 00:10:27 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdlbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-6wqjg_openshift-dns(cf1d78e4-c2f4-47e1-ae36-612264d9d70c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:27 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.683405 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77v2r" event={"ID":"db003609-47aa-4a6a-a7ec-6dbc03ded29a","Type":"ContainerStarted","Data":"cb2e581aac84844bc270a8b77f7bc1bac2ba3853baa81e27aadc94311e5b1e36"} Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.683481 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:27 crc kubenswrapper[5110]: container 
&Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 26 00:10:27 crc kubenswrapper[5110]: while [ true ]; Jan 26 00:10:27 crc kubenswrapper[5110]: do Jan 26 00:10:27 crc kubenswrapper[5110]: for f in $(ls /tmp/serviceca); do Jan 26 00:10:27 crc kubenswrapper[5110]: echo $f Jan 26 00:10:27 crc kubenswrapper[5110]: ca_file_path="/tmp/serviceca/${f}" Jan 26 00:10:27 crc kubenswrapper[5110]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 26 00:10:27 crc kubenswrapper[5110]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 26 00:10:27 crc kubenswrapper[5110]: if [ -e "${reg_dir_path}" ]; then Jan 26 00:10:27 crc kubenswrapper[5110]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 26 00:10:27 crc kubenswrapper[5110]: else Jan 26 00:10:27 crc kubenswrapper[5110]: mkdir $reg_dir_path Jan 26 00:10:27 crc kubenswrapper[5110]: cp $ca_file_path $reg_dir_path/ca.crt Jan 26 00:10:27 crc kubenswrapper[5110]: fi Jan 26 00:10:27 crc kubenswrapper[5110]: done Jan 26 00:10:27 crc kubenswrapper[5110]: for d in $(ls /etc/docker/certs.d); do Jan 26 00:10:27 crc kubenswrapper[5110]: echo $d Jan 26 00:10:27 crc kubenswrapper[5110]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 26 00:10:27 crc kubenswrapper[5110]: reg_conf_path="/tmp/serviceca/${dp}" Jan 26 00:10:27 crc kubenswrapper[5110]: if [ ! 
-e "${reg_conf_path}" ]; then Jan 26 00:10:27 crc kubenswrapper[5110]: rm -rf /etc/docker/certs.d/$d Jan 26 00:10:27 crc kubenswrapper[5110]: fi Jan 26 00:10:27 crc kubenswrapper[5110]: done Jan 26 00:10:27 crc kubenswrapper[5110]: sleep 60 & wait ${!} Jan 26 00:10:27 crc kubenswrapper[5110]: done Jan 26 00:10:27 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48xmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-p427g_openshift-image-registry(b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:27 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.684192 5110 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-6wqjg" podUID="cf1d78e4-c2f4-47e1-ae36-612264d9d70c" Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.684624 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-p427g" podUID="b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b" Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.685020 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8qwlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceac
count,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-77v2r_openshift-multus(db003609-47aa-4a6a-a7ec-6dbc03ded29a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.685451 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" event={"ID":"c2cba3eb-9a27-49a0-a3e6-645a8853c027","Type":"ContainerStarted","Data":"0846a745825adace2d108546cbd36763771fe83521c75441230842847d8e72c2"} Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.686163 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-77v2r" podUID="db003609-47aa-4a6a-a7ec-6dbc03ded29a" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.686658 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jh4hk" event={"ID":"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec","Type":"ContainerStarted","Data":"49c88247be6ea34dc42d1e4c7fc65989be56f57adae77c233b18b736a73fd161"} Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.687316 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:27 crc kubenswrapper[5110]: init container 
&Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 26 00:10:27 crc kubenswrapper[5110]: apiVersion: v1 Jan 26 00:10:27 crc kubenswrapper[5110]: clusters: Jan 26 00:10:27 crc kubenswrapper[5110]: - cluster: Jan 26 00:10:27 crc kubenswrapper[5110]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 26 00:10:27 crc kubenswrapper[5110]: server: https://api-int.crc.testing:6443 Jan 26 00:10:27 crc kubenswrapper[5110]: name: default-cluster Jan 26 00:10:27 crc kubenswrapper[5110]: contexts: Jan 26 00:10:27 crc kubenswrapper[5110]: - context: Jan 26 00:10:27 crc kubenswrapper[5110]: cluster: default-cluster Jan 26 00:10:27 crc kubenswrapper[5110]: namespace: default Jan 26 00:10:27 crc kubenswrapper[5110]: user: default-auth Jan 26 00:10:27 crc kubenswrapper[5110]: name: default-context Jan 26 00:10:27 crc kubenswrapper[5110]: current-context: default-context Jan 26 00:10:27 crc kubenswrapper[5110]: kind: Config Jan 26 00:10:27 crc kubenswrapper[5110]: preferences: {} Jan 26 00:10:27 crc kubenswrapper[5110]: users: Jan 26 00:10:27 crc kubenswrapper[5110]: - name: default-auth Jan 26 00:10:27 crc kubenswrapper[5110]: user: Jan 26 00:10:27 crc kubenswrapper[5110]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 26 00:10:27 crc kubenswrapper[5110]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 26 00:10:27 crc kubenswrapper[5110]: EOF Jan 26 00:10:27 crc kubenswrapper[5110]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kp4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-bnkth_openshift-ovn-kubernetes(c2cba3eb-9a27-49a0-a3e6-645a8853c027): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:27 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.687421 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ngjmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-c6tpr_openshift-machine-config-operator(f15bed73-d669-439f-9828-7b952d9bfe65): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.688566 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.689183 5110 
kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:27 crc kubenswrapper[5110]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 26 00:10:27 crc kubenswrapper[5110]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 26 00:10:27 crc kubenswrapper[5110]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6rnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-jh4hk_openshift-multus(f2948d2b-fac7-4f3f-8b5f-f6f9c914daec): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:27 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.689220 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"1864dd0fa4921a0f61ea01af6d7f74595fc9a73ac3f81925fb0aa44f6e64b2b7"} Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.690271 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with 
CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-jh4hk" podUID="f2948d2b-fac7-4f3f-8b5f-f6f9c914daec" Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.691004 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.691037 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11dce8d-124f-497f-96a2-11dd1dddd26d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mlqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mlqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-qgzzt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.691593 5110 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ngjmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-c6tpr_openshift-machine-config-operator(f15bed73-d669-439f-9828-7b952d9bfe65): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.692122 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.692178 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" event={"ID":"f11dce8d-124f-497f-96a2-11dd1dddd26d","Type":"ContainerStarted","Data":"b828bb804ab6cf9a77bb51fb75c6134d1b6590e4afa009431ee2df876fd9e62c"} Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.693288 5110 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.695169 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:27 crc kubenswrapper[5110]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 26 00:10:27 crc kubenswrapper[5110]: set -euo pipefail Jan 26 00:10:27 crc kubenswrapper[5110]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 26 00:10:27 crc kubenswrapper[5110]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 26 00:10:27 crc kubenswrapper[5110]: # As the secret mount is optional we must wait for the files to be present. Jan 26 00:10:27 crc kubenswrapper[5110]: # The service is created in monitor.yaml and this is created in sdn.yaml. Jan 26 00:10:27 crc kubenswrapper[5110]: TS=$(date +%s) Jan 26 00:10:27 crc kubenswrapper[5110]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 26 00:10:27 crc kubenswrapper[5110]: HAS_LOGGED_INFO=0 Jan 26 00:10:27 crc kubenswrapper[5110]: Jan 26 00:10:27 crc kubenswrapper[5110]: log_missing_certs(){ Jan 26 00:10:27 crc kubenswrapper[5110]: CUR_TS=$(date +%s) Jan 26 00:10:27 crc kubenswrapper[5110]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 26 00:10:27 crc kubenswrapper[5110]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. 
Jan 26 00:10:27 crc kubenswrapper[5110]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 26 00:10:27 crc kubenswrapper[5110]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 26 00:10:27 crc kubenswrapper[5110]: HAS_LOGGED_INFO=1 Jan 26 00:10:27 crc kubenswrapper[5110]: fi Jan 26 00:10:27 crc kubenswrapper[5110]: } Jan 26 00:10:27 crc kubenswrapper[5110]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Jan 26 00:10:27 crc kubenswrapper[5110]: log_missing_certs Jan 26 00:10:27 crc kubenswrapper[5110]: sleep 5 Jan 26 00:10:27 crc kubenswrapper[5110]: done Jan 26 00:10:27 crc kubenswrapper[5110]: Jan 26 00:10:27 crc kubenswrapper[5110]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 26 00:10:27 crc kubenswrapper[5110]: exec /usr/bin/kube-rbac-proxy \ Jan 26 00:10:27 crc kubenswrapper[5110]: --logtostderr \ Jan 26 00:10:27 crc kubenswrapper[5110]: --secure-listen-address=:9108 \ Jan 26 00:10:27 crc kubenswrapper[5110]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 26 00:10:27 crc kubenswrapper[5110]: --upstream=http://127.0.0.1:29108/ \ Jan 26 00:10:27 crc kubenswrapper[5110]: --tls-private-key-file=${TLS_PK} \ Jan 26 00:10:27 crc kubenswrapper[5110]: --tls-cert-file=${TLS_CERT} Jan 26 00:10:27 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mlqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-qgzzt_openshift-ovn-kubernetes(f11dce8d-124f-497f-96a2-11dd1dddd26d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:27 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.698421 5110 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:27 crc kubenswrapper[5110]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 26 00:10:27 crc kubenswrapper[5110]: if [[ -f "/env/_master" ]]; then Jan 26 00:10:27 crc kubenswrapper[5110]: set -o allexport Jan 26 00:10:27 crc kubenswrapper[5110]: source "/env/_master" Jan 26 00:10:27 crc kubenswrapper[5110]: set +o allexport Jan 26 00:10:27 crc kubenswrapper[5110]: fi Jan 26 00:10:27 crc kubenswrapper[5110]: Jan 26 00:10:27 crc kubenswrapper[5110]: ovn_v4_join_subnet_opt= Jan 26 00:10:27 crc kubenswrapper[5110]: if [[ "" != "" ]]; then Jan 26 00:10:27 crc kubenswrapper[5110]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 26 
00:10:27 crc kubenswrapper[5110]: fi Jan 26 00:10:27 crc kubenswrapper[5110]: ovn_v6_join_subnet_opt= Jan 26 00:10:27 crc kubenswrapper[5110]: if [[ "" != "" ]]; then Jan 26 00:10:27 crc kubenswrapper[5110]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 26 00:10:27 crc kubenswrapper[5110]: fi Jan 26 00:10:27 crc kubenswrapper[5110]: Jan 26 00:10:27 crc kubenswrapper[5110]: ovn_v4_transit_switch_subnet_opt= Jan 26 00:10:27 crc kubenswrapper[5110]: if [[ "" != "" ]]; then Jan 26 00:10:27 crc kubenswrapper[5110]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 26 00:10:27 crc kubenswrapper[5110]: fi Jan 26 00:10:27 crc kubenswrapper[5110]: ovn_v6_transit_switch_subnet_opt= Jan 26 00:10:27 crc kubenswrapper[5110]: if [[ "" != "" ]]; then Jan 26 00:10:27 crc kubenswrapper[5110]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 26 00:10:27 crc kubenswrapper[5110]: fi Jan 26 00:10:27 crc kubenswrapper[5110]: Jan 26 00:10:27 crc kubenswrapper[5110]: dns_name_resolver_enabled_flag= Jan 26 00:10:27 crc kubenswrapper[5110]: if [[ "false" == "true" ]]; then Jan 26 00:10:27 crc kubenswrapper[5110]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 26 00:10:27 crc kubenswrapper[5110]: fi Jan 26 00:10:27 crc kubenswrapper[5110]: Jan 26 00:10:27 crc kubenswrapper[5110]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 26 00:10:27 crc kubenswrapper[5110]: Jan 26 00:10:27 crc kubenswrapper[5110]: # This is needed so that converting clusters from GA to TP Jan 26 00:10:27 crc kubenswrapper[5110]: # will rollout control plane pods as well Jan 26 00:10:27 crc kubenswrapper[5110]: network_segmentation_enabled_flag= Jan 26 00:10:27 crc kubenswrapper[5110]: multi_network_enabled_flag= Jan 26 00:10:27 crc kubenswrapper[5110]: if [[ "true" == "true" ]]; then Jan 26 00:10:27 crc kubenswrapper[5110]: multi_network_enabled_flag="--enable-multi-network" Jan 26 00:10:27 crc kubenswrapper[5110]: fi 
Jan 26 00:10:27 crc kubenswrapper[5110]: if [[ "true" == "true" ]]; then
Jan 26 00:10:27 crc kubenswrapper[5110]: if [[ "true" != "true" ]]; then
Jan 26 00:10:27 crc kubenswrapper[5110]: multi_network_enabled_flag="--enable-multi-network"
Jan 26 00:10:27 crc kubenswrapper[5110]: fi
Jan 26 00:10:27 crc kubenswrapper[5110]: network_segmentation_enabled_flag="--enable-network-segmentation"
Jan 26 00:10:27 crc kubenswrapper[5110]: fi
Jan 26 00:10:27 crc kubenswrapper[5110]:
Jan 26 00:10:27 crc kubenswrapper[5110]: route_advertisements_enable_flag=
Jan 26 00:10:27 crc kubenswrapper[5110]: if [[ "false" == "true" ]]; then
Jan 26 00:10:27 crc kubenswrapper[5110]: route_advertisements_enable_flag="--enable-route-advertisements"
Jan 26 00:10:27 crc kubenswrapper[5110]: fi
Jan 26 00:10:27 crc kubenswrapper[5110]:
Jan 26 00:10:27 crc kubenswrapper[5110]: preconfigured_udn_addresses_enable_flag=
Jan 26 00:10:27 crc kubenswrapper[5110]: if [[ "false" == "true" ]]; then
Jan 26 00:10:27 crc kubenswrapper[5110]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses"
Jan 26 00:10:27 crc kubenswrapper[5110]: fi
Jan 26 00:10:27 crc kubenswrapper[5110]:
Jan 26 00:10:27 crc kubenswrapper[5110]: # Enable multi-network policy if configured (control-plane always full mode)
Jan 26 00:10:27 crc kubenswrapper[5110]: multi_network_policy_enabled_flag=
Jan 26 00:10:27 crc kubenswrapper[5110]: if [[ "false" == "true" ]]; then
Jan 26 00:10:27 crc kubenswrapper[5110]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy"
Jan 26 00:10:27 crc kubenswrapper[5110]: fi
Jan 26 00:10:27 crc kubenswrapper[5110]:
Jan 26 00:10:27 crc kubenswrapper[5110]: # Enable admin network policy if configured (control-plane always full mode)
Jan 26 00:10:27 crc kubenswrapper[5110]: admin_network_policy_enabled_flag=
Jan 26 00:10:27 crc kubenswrapper[5110]: if [[ "true" == "true" ]]; then
Jan 26 00:10:27 crc kubenswrapper[5110]: admin_network_policy_enabled_flag="--enable-admin-network-policy"
Jan 26 00:10:27 crc kubenswrapper[5110]: fi
Jan 26 00:10:27 crc kubenswrapper[5110]:
Jan 26 00:10:27 crc kubenswrapper[5110]: if [ "shared" == "shared" ]; then
Jan 26 00:10:27 crc kubenswrapper[5110]: gateway_mode_flags="--gateway-mode shared"
Jan 26 00:10:27 crc kubenswrapper[5110]: elif [ "shared" == "local" ]; then
Jan 26 00:10:27 crc kubenswrapper[5110]: gateway_mode_flags="--gateway-mode local"
Jan 26 00:10:27 crc kubenswrapper[5110]: else
Jan 26 00:10:27 crc kubenswrapper[5110]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"."
Jan 26 00:10:27 crc kubenswrapper[5110]: exit 1
Jan 26 00:10:27 crc kubenswrapper[5110]: fi
Jan 26 00:10:27 crc kubenswrapper[5110]:
Jan 26 00:10:27 crc kubenswrapper[5110]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}"
Jan 26 00:10:27 crc kubenswrapper[5110]: exec /usr/bin/ovnkube \
Jan 26 00:10:27 crc kubenswrapper[5110]: --enable-interconnect \
Jan 26 00:10:27 crc kubenswrapper[5110]: --init-cluster-manager "${K8S_NODE}" \
Jan 26 00:10:27 crc kubenswrapper[5110]: --config-file=/run/ovnkube-config/ovnkube.conf \
Jan 26 00:10:27 crc kubenswrapper[5110]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \
Jan 26 00:10:27 crc kubenswrapper[5110]: --metrics-bind-address "127.0.0.1:29108" \
Jan 26 00:10:27 crc kubenswrapper[5110]: --metrics-enable-pprof \
Jan 26 00:10:27 crc kubenswrapper[5110]: --metrics-enable-config-duration \
Jan 26 00:10:27 crc kubenswrapper[5110]: ${ovn_v4_join_subnet_opt} \
Jan 26 00:10:27 crc kubenswrapper[5110]: ${ovn_v6_join_subnet_opt} \
Jan 26 00:10:27 crc kubenswrapper[5110]: ${ovn_v4_transit_switch_subnet_opt} \
Jan 26 00:10:27 crc kubenswrapper[5110]: ${ovn_v6_transit_switch_subnet_opt} \
Jan 26 00:10:27 crc kubenswrapper[5110]: ${dns_name_resolver_enabled_flag} \
Jan 26 00:10:27 crc kubenswrapper[5110]: ${persistent_ips_enabled_flag} \
Jan 26 00:10:27 crc kubenswrapper[5110]: ${multi_network_enabled_flag} \
Jan 26 00:10:27 crc kubenswrapper[5110]: ${network_segmentation_enabled_flag} \
Jan 26 00:10:27 crc kubenswrapper[5110]: ${gateway_mode_flags} \
Jan 26 00:10:27 crc kubenswrapper[5110]: ${route_advertisements_enable_flag} \
Jan 26 00:10:27 crc kubenswrapper[5110]: ${preconfigured_udn_addresses_enable_flag} \
Jan 26 00:10:27 crc kubenswrapper[5110]: --enable-egress-ip=true \
Jan 26 00:10:27 crc kubenswrapper[5110]: --enable-egress-firewall=true \
Jan 26 00:10:27 crc kubenswrapper[5110]: --enable-egress-qos=true \
Jan 26 00:10:27 crc kubenswrapper[5110]: --enable-egress-service=true \
Jan 26 00:10:27 crc kubenswrapper[5110]: --enable-multicast \
Jan 26 00:10:27 crc kubenswrapper[5110]: --enable-multi-external-gateway=true \
Jan 26 00:10:27 crc kubenswrapper[5110]: ${multi_network_policy_enabled_flag} \
Jan 26 00:10:27 crc kubenswrapper[5110]: ${admin_network_policy_enabled_flag}
Jan 26 00:10:27 crc kubenswrapper[5110]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mlqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-qgzzt_openshift-ovn-kubernetes(f11dce8d-124f-497f-96a2-11dd1dddd26d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:27 crc kubenswrapper[5110]: > logger="UnhandledError" Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.700468 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" podUID="f11dce8d-124f-497f-96a2-11dd1dddd26d" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.701558 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8ndzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"040d3d5f-c02a-4a70-92af-70700fd9e3c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zsxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zsxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8ndzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.725027 5110 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd7963f-3ea7-4581-8c42-0bd2de1e5540\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e90c7cb0f7f52fbb22c7c05dda374c6044ece3a6eb021c481437ea8f5de1298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2277f081ca24900f20ca43d5a74890c20bee6e3ec8ad9e4309f66cf96678660e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5241883a28ef3e6af36e6b5a5cca557f4bdb4444cdb5ba3cd00b1e9f28243d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":
{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0a980f89ffe1688954fbe60c16faeaf282be70e3d5a3968f74311a125158488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://62c4e498d0857df9b3c9615022c95ca7ce1497c476eb255d3275604912cf63f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9556ace6d728cde3159c1c8e97e12787903a3279cba1610d592fe244eff5ae5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9556ace6d728cde3159c1c8e97e12787903a3279cba1610d592fe244eff5ae5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\
\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8339480b8ee693e38bc6660f0b37482ff26d091553c39db9cf75f4f8b95f96fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8339480b8ee693e38bc6660f0b37482ff26d091553c39db9cf75f4f8b95f96fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://fb3dc595d8f46c045ba87b7515cb2d4576bb2237750740db45c461f69d392c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb3dc595d8f46c045ba87b7515cb2d4576bb2237750740db45c461f69d392c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.737301 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafd472-3344-4a8b-a4ef-5242709c94d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86dfaec2435d8c668da9a0ae44fdc2a617089e161cd08dcdf2559f662c8f2b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c2eb152f744e6c7d629e5eb8590f1c2d570d8f370e9deede5c4436d336c754bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3734fa9576160d905d72573d1984bf03448ada8b30a78ff574ce89cb894632d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://46fb6b4c2b5179db0acdc17fca0fe367346d5e4b61798c9054f4af190e4a8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46fb6b4c2b5179db0acdc17fca0fe367346d5e4b61798c9054f4af190e4a8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.750293 5110 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.758893 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-6wqjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d78e4-c2f4-47e1-ae36-612264d9d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdlbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6wqjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.768616 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f15bed73-d669-439f-9828-7b952d9bfe65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngjmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngjmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6tpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.778466 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.778774 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.778844 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.778892 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.778923 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:27Z","lastTransitionTime":"2026-01-26T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.779361 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.786968 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p427g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-48xmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p427g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.808908 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cba3eb-9a27-49a0-a3e6-645a8853c027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnkth\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.844395 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-jh4hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6rnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jh4hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.881234 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.881303 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.881318 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.881341 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.881362 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:27Z","lastTransitionTime":"2026-01-26T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.888353 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf35b8e-a6c5-479a-860f-db0308fb993b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a67e63f2b2d217c10ec50b836c9cca85c134aef230c27dbb1ae8139a6dddd72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://edc58dbb97e7a54abf71095739d515edea8205bf2c5e4008b22a69d40b83cb2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://44222dc6a7066064989d8823f6c0f556f4d4d8cac2f2b4ab1462cbb51db8ff07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3202209aff
c7d7c4869fa3cbbd7b3a166dee850e7789b407826358743c13e508\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3202209affc7d7c4869fa3cbbd7b3a166dee850e7789b407826358743c13e508\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:09Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0126 00:10:09.419281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:09.419514 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:09.420601 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1392405372/tls.crt::/tmp/serving-cert-1392405372/tls.key\\\\\\\"\\\\nI0126 00:10:09.740266 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:09.742210 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:09.742231 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:09.742265 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:09.742270 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:09.746497 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:10:09.746538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:09.746544 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:09.746549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:09.746551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:09.746556 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:09.746559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 00:10:09.746563 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 00:10:09.749601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fc0c2347169345d3886bfc369c472fa0e2cf381c3b596a46b7de48d84e955504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2fe6116a5b9da02a5b57aaf57f7bed21247934453d936503dd835d1eac1c23b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe6116a5b9da02a5b57aaf57f7bed21247934453d936503dd835d1eac1c23b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.925714 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.946972 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 
00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.947057 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.947170 5110 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.947205 5110 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.947259 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:29.947231634 +0000 UTC m=+87.176130253 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.947355 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.947418 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.947435 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.947450 5110 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.947459 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:29.947413659 +0000 UTC m=+87.176312268 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.947520 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:29.947511532 +0000 UTC m=+87.176410141 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.947584 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.947761 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.947816 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.947831 5110 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:27 crc kubenswrapper[5110]: E0126 00:10:27.947925 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:29.947902733 +0000 UTC m=+87.176801342 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.965779 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.984260 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.984316 5110 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.984331 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.984361 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:27 crc kubenswrapper[5110]: I0126 00:10:27.984376 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:27Z","lastTransitionTime":"2026-01-26T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.003721 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecd0515-69bc-4e35-9cac-3edd40468f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://30ae138b7d197dd54752f6c812a2ea9e7f23f015060d7f1c186abfe872d810a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e8cd7582659a048bc238ce9753c82ac7555698b76159d5cf809cfd7eddfa8d7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8cd7582659a048bc238ce9753c82ac7555698b76159d5cf809cfd7eddfa8d7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.050386 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7e97ec-d5be-4c28-bde0-55be95e4d947\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c0ec4f07f5bc1820f2ebd1ad017e35f029e5b8166971c3676b9f348147456cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPa
th\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://201fbacf5107a565f88c0b909a2d52c716cda4ea473eb26d2c613fe248ecf434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://74a99588daa785243f933c28a11ac41cbdab81eb6fa291c097f633a57d2e90df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi
\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c060a49b2c9f19914a87af1d9e1860fcf614171795c22fcd1fb629d84d3df5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.087521 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.087593 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.087614 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.087644 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.087666 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.088344 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.126078 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.150017 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:28 crc kubenswrapper[5110]: E0126 00:10:28.150298 5110 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:30.150255519 +0000 UTC m=+87.379154128 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.168391 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77v2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db003609-47aa-4a6a-a7ec-6dbc03ded29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77v2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.190095 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.190160 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.190174 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.190196 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.190212 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.202607 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecd0515-69bc-4e35-9cac-3edd40468f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://30ae138b7d197dd54752f6c812a2ea9e7f23f015060d7f1c186abfe872d810a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}}
,\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e8cd7582659a048bc238ce9753c82ac7555698b76159d5cf809cfd7eddfa8d7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8cd7582659a048bc238ce9753c82ac7555698b76159d5cf809cfd7eddfa8d7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.246343 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7e97ec-d5be-4c28-bde0-55be95e4d947\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c0ec4f07f5bc1820f2ebd1ad017e35f029e5b8166971c3676b9f348147456cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://201fbacf5107a565f88c0b909a2d52c716cda4ea473eb26d2c613fe248ecf434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://74a99588daa785243f933c28a11ac41cbdab81eb6fa291c097f633a57d2e90df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc
7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c060a49b2c9f19914a87af1d9e1860fcf614171795c22fcd1fb629d84d3df5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.252079 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs\") pod \"network-metrics-daemon-8ndzr\" (UID: \"040d3d5f-c02a-4a70-92af-70700fd9e3c3\") " pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:10:28 crc kubenswrapper[5110]: E0126 00:10:28.252357 5110 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:28 crc kubenswrapper[5110]: E0126 00:10:28.252508 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs podName:040d3d5f-c02a-4a70-92af-70700fd9e3c3 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:30.252474997 +0000 UTC m=+87.481373616 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs") pod "network-metrics-daemon-8ndzr" (UID: "040d3d5f-c02a-4a70-92af-70700fd9e3c3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.284252 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.292773 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.292845 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.292862 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.292884 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.292897 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.316371 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.316475 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:10:28 crc kubenswrapper[5110]: E0126 00:10:28.316599 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.316658 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.316964 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:28 crc kubenswrapper[5110]: E0126 00:10:28.316954 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3" Jan 26 00:10:28 crc kubenswrapper[5110]: E0126 00:10:28.317130 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:28 crc kubenswrapper[5110]: E0126 00:10:28.317317 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.326693 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.367047 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77v2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db003609-47aa-4a6a-a7ec-6dbc03ded29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77v2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.395332 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.395388 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.395398 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 
crc kubenswrapper[5110]: I0126 00:10:28.395418 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.395430 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.405338 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11dce8d-124f-497f-96a2-11dd1dddd26d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mlqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mlqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-qgzzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.445621 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8ndzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"040d3d5f-c02a-4a70-92af-70700fd9e3c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zsxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zsxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8ndzr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.495113 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd7963f-3ea7-4581-8c42-0bd2de1e5540\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e90c7cb0f7f52fbb22c7c05dda374c6044ece3a6eb021c481437ea8f5de1298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:08Z\\\"}},\\\"use
r\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2277f081ca24900f20ca43d5a74890c20bee6e3ec8ad9e4309f66cf96678660e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5241883a28ef3e6af36e6b5a5cca557f4bdb4444cdb5ba3cd00b1e9f28243d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259
fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0a980f89ffe1688954fbe60c16faeaf282be70e3d5a3968f74311a125158488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"contain
erID\\\":\\\"cri-o://62c4e498d0857df9b3c9615022c95ca7ce1497c476eb255d3275604912cf63f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9556ace6d728cde3159c1c8e97e12787903a3279cba1610d592fe244eff5ae5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://9556ace6d728cde3159c1c8e97e12787903a3279cba1610d592fe244eff5ae5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8339480b8ee693e38bc6660f0b37482ff26d091553c39db9cf75f4f8b95f96fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8339480b8ee693e38bc6660f0b37482ff26d091553c39db9cf75f4f8b95f96fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://fb3dc595d8f46c045ba87b7515cb2d4576bb2237750740db45c461f69d392c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb3dc595d8f46c045ba87b7515cb2d4576bb2237750740db45c461f69d392c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.497988 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.498066 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.498083 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc 
kubenswrapper[5110]: I0126 00:10:28.498103 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.498117 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.525213 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafd472-3344-4a8b-a4ef-5242709c94d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86dfaec2435d8c668da9a0ae44fdc2a617089e161cd08dcdf2559f662c8f2b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c2eb152f744e6c7d629e5eb8590f1c2d570d8f370e9deede5c4436d336c754bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\
\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3734fa9576160d905d72573d1984bf03448ada8b30a78ff574ce89cb894632d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://46fb6b4c2b5179db0acdc17fca0fe367346d5e4b61798c9054f4af190e4a8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\
\\"cri-o://46fb6b4c2b5179db0acdc17fca0fe367346d5e4b61798c9054f4af190e4a8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.565093 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.601590 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.601666 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc 
kubenswrapper[5110]: I0126 00:10:28.601682 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.601707 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.601723 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.603767 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-6wqjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d78e4-c2f4-47e1-ae36-612264d9d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdlbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6wqjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.645671 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f15bed73-d669-439f-9828-7b952d9bfe65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngjmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngjmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6tpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.686213 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.703535 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.703573 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.703587 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.703607 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.703621 5110 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.725267 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p427g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-48xmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p427g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.770077 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cba3eb-9a27-49a0-a3e6-645a8853c027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnkth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.806483 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.806549 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.806564 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.806586 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.806599 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.807610 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-jh4hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6rnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jh4hk\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.846166 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf35b8e-a6c5-479a-860f-db0308fb993b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a67e63f2b2d217c10ec50b836c9cca85c134aef230c27dbb1ae8139a6dddd72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://edc58dbb97e7a54abf71095739d515edea8205bf2c5e4008b22a69d40b83cb2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://44222dc6a7066064989d8823f6c0f556f4d4d8cac2f2b4ab1462cbb51db8ff07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3202209aff
c7d7c4869fa3cbbd7b3a166dee850e7789b407826358743c13e508\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3202209affc7d7c4869fa3cbbd7b3a166dee850e7789b407826358743c13e508\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:09Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0126 00:10:09.419281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:09.419514 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:09.420601 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1392405372/tls.crt::/tmp/serving-cert-1392405372/tls.key\\\\\\\"\\\\nI0126 00:10:09.740266 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:09.742210 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:09.742231 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:09.742265 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:09.742270 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:09.746497 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:10:09.746538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:09.746544 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:09.746549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:09.746551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:09.746556 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:09.746559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 00:10:09.746563 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 00:10:09.749601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fc0c2347169345d3886bfc369c472fa0e2cf381c3b596a46b7de48d84e955504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2fe6116a5b9da02a5b57aaf57f7bed21247934453d936503dd835d1eac1c23b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe6116a5b9da02a5b57aaf57f7bed21247934453d936503dd835d1eac1c23b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.886737 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.909292 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.909363 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.909375 5110 kubelet_node_status.go:736] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.909398 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.909412 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5110]: I0126 00:10:28.926051 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.012002 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.012098 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 
00:10:29.012113 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.012135 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.012153 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.115045 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.115111 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.115125 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.115176 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.115192 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.217841 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.217937 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.217964 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.217997 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.218029 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.320592 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.320654 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.320669 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.320691 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.320705 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.423323 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.423391 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.423408 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.423430 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.423446 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.526011 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.526074 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.526089 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.526105 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.526116 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.628950 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.629005 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.629018 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.629034 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.629046 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.732039 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.732117 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.732137 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.732172 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.732308 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.835056 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.835134 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.835153 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.835180 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.835213 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.937564 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.937635 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.937648 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.937671 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.937685 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.973424 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.973476 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.973504 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:29 crc kubenswrapper[5110]: I0126 00:10:29.973526 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:29 crc kubenswrapper[5110]: E0126 00:10:29.973639 5110 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:29 crc kubenswrapper[5110]: E0126 00:10:29.973680 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:29 crc kubenswrapper[5110]: E0126 00:10:29.973697 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:29 crc kubenswrapper[5110]: E0126 00:10:29.973703 5110 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:29 crc kubenswrapper[5110]: E0126 00:10:29.973710 5110 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:29 crc kubenswrapper[5110]: E0126 00:10:29.973776 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:33.973742014 +0000 UTC m=+91.202640643 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:29 crc kubenswrapper[5110]: E0126 00:10:29.973841 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:33.973825866 +0000 UTC m=+91.202724495 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:29 crc kubenswrapper[5110]: E0126 00:10:29.973868 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:33.973853227 +0000 UTC m=+91.202751856 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:29 crc kubenswrapper[5110]: E0126 00:10:29.973895 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:29 crc kubenswrapper[5110]: E0126 00:10:29.973909 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:29 crc kubenswrapper[5110]: E0126 00:10:29.973918 5110 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:29 crc kubenswrapper[5110]: E0126 00:10:29.973966 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:33.97395343 +0000 UTC m=+91.202852039 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.040464 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.040526 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.040549 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.040574 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.040590 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.143408 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.143462 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.143474 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.143491 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.143503 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.175066 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:30 crc kubenswrapper[5110]: E0126 00:10:30.175170 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-26 00:10:34.175146134 +0000 UTC m=+91.404044743 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.246603 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.246666 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.246676 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.246696 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.246710 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.277053 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs\") pod \"network-metrics-daemon-8ndzr\" (UID: \"040d3d5f-c02a-4a70-92af-70700fd9e3c3\") " pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:10:30 crc kubenswrapper[5110]: E0126 00:10:30.277315 5110 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:30 crc kubenswrapper[5110]: E0126 00:10:30.277413 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs podName:040d3d5f-c02a-4a70-92af-70700fd9e3c3 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:34.277388422 +0000 UTC m=+91.506287031 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs") pod "network-metrics-daemon-8ndzr" (UID: "040d3d5f-c02a-4a70-92af-70700fd9e3c3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.316411 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.316472 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.316411 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:30 crc kubenswrapper[5110]: E0126 00:10:30.316635 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:30 crc kubenswrapper[5110]: E0126 00:10:30.316759 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.316872 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:30 crc kubenswrapper[5110]: E0126 00:10:30.316959 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3" Jan 26 00:10:30 crc kubenswrapper[5110]: E0126 00:10:30.317047 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.349871 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.349947 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.349965 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.349993 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.350012 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.452445 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.452575 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.452595 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.452626 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.452656 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.555687 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.555758 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.555774 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.555819 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.555836 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.659166 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.659230 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.659245 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.659273 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.659286 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.761724 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.761784 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.761821 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.761940 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.762092 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.865591 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.865656 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.865670 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.865692 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.865707 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.968033 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.968081 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.968093 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.968110 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5110]: I0126 00:10:30.968122 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.071025 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.071125 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.071146 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.071176 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.071226 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.174266 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.174333 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.174344 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.174360 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.174373 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.277014 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.277079 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.277091 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.277112 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.277126 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.379368 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.379440 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.379453 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.379477 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.379493 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.481884 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.481948 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.481959 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.481981 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.481994 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.584338 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.584400 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.584414 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.584435 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.584449 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.687574 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.687623 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.687632 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.687652 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.687662 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.705816 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.705870 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.705885 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.705904 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.705918 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5110]: E0126 00:10:31.716996 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"649ae373-4a50-43e7-bb88-2a6949129be7\\\",\\\"systemUUID\\\":\\\"8b97abb7-24be-4a3b-9f16-cd27402370ca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.720428 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.720469 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.720481 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.720499 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.720512 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
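The "Node became not ready" entries above carry the failing condition as inline JSON after `condition=`. A minimal sketch of pulling that payload out of such a line (the helper name and the trimmed sample line are illustrative, not part of any kubelet tooling; real journal lines may wrap the message across multiple lines):

```python
import json
import re

def parse_not_ready(line: str) -> dict:
    """Extract the condition={...} JSON from a kubelet setters.go line.

    Hypothetical helper for log triage; assumes the JSON object sits on
    one line, which holds for this sample but not for wrapped entries.
    """
    m = re.search(r'condition=(\{.*\})', line)
    if not m:
        raise ValueError("no condition payload found")
    return json.loads(m.group(1))

# Sample trimmed from the excerpt above (message shortened for brevity).
sample = (
    'I0126 00:10:31.705918 5110 setters.go:618] "Node became not ready" '
    'node="crc" condition={"type":"Ready","status":"False",'
    '"reason":"KubeletNotReady",'
    '"message":"container runtime network not ready"}'
)

cond = parse_not_ready(sample)
print(cond["reason"])  # KubeletNotReady
```

This makes the root symptom machine-readable: `reason` is `KubeletNotReady` and the message points at the missing CNI configuration in /etc/kubernetes/cni/net.d/.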
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5110]: E0126 00:10:31.730112 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"649ae373-4a50-43e7-bb88-2a6949129be7\\\",\\\"systemUUID\\\":\\\"8b97abb7-24be-4a3b-9f16-cd27402370ca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.733725 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.733774 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.733811 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.733832 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.733846 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5110]: E0126 00:10:31.743211 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"649ae373-4a50-43e7-bb88-2a6949129be7\\\",\\\"systemUUID\\\":\\\"8b97abb7-24be-4a3b-9f16-cd27402370ca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.746661 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.746702 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.746714 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.746729 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.746741 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5110]: E0126 00:10:31.755251 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"649ae373-4a50-43e7-bb88-2a6949129be7\\\",\\\"systemUUID\\\":\\\"8b97abb7-24be-4a3b-9f16-cd27402370ca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.758369 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.758400 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.758411 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.758428 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.758439 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5110]: E0126 00:10:31.767373 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"649ae373-4a50-43e7-bb88-2a6949129be7\\\",\\\"systemUUID\\\":\\\"8b97abb7-24be-4a3b-9f16-cd27402370ca\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5110]: E0126 00:10:31.767562 5110 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.789603 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.789641 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.789653 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.789676 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.789689 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.892326 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.892437 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.892482 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.892521 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.892545 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.994770 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.994871 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.994885 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.994904 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5110]: I0126 00:10:31.994917 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.098271 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.098328 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.098338 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.098359 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.098372 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.201356 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.201419 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.201433 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.201456 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.201473 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.303616 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.303675 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.303688 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.303710 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.303722 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.317056 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.317121 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.317150 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:32 crc kubenswrapper[5110]: E0126 00:10:32.317295 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:32 crc kubenswrapper[5110]: E0126 00:10:32.317427 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.317477 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:10:32 crc kubenswrapper[5110]: E0126 00:10:32.317548 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3" Jan 26 00:10:32 crc kubenswrapper[5110]: E0126 00:10:32.317603 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.406568 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.406624 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.406634 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.406679 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.406691 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.409839 5110 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.509309 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.509378 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.509391 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.509413 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.509427 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.612025 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.612089 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.612107 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.612136 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.612153 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.714160 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.714223 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.714237 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.714256 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.714270 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.817223 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.817288 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.817301 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.817321 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.817337 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.920312 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.920397 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.920411 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.920435 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5110]: I0126 00:10:32.920446 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.023397 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.023488 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.023511 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.023544 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.023567 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:33Z","lastTransitionTime":"2026-01-26T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.125907 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.125975 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.125995 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.126017 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.126030 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:33Z","lastTransitionTime":"2026-01-26T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.229389 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.229438 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.229450 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.229465 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.229476 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:33Z","lastTransitionTime":"2026-01-26T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.330398 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aecd0515-69bc-4e35-9cac-3edd40468f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://30ae138b7d197dd54752f6c812a2ea9e7f23f015060d7f1c186abfe872d810a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":
65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e8cd7582659a048bc238ce9753c82ac7555698b76159d5cf809cfd7eddfa8d7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8cd7582659a048bc238ce9753c82ac7555698b76159d5cf809cfd7eddfa8d7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.332415 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.332470 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.332480 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.332500 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.332511 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:33Z","lastTransitionTime":"2026-01-26T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.345079 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7e97ec-d5be-4c28-bde0-55be95e4d947\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c0ec4f07f5bc1820f2ebd1ad017e35f029e5b8166971c3676b9f348147456cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://201fbacf5107a565f88c0b909a2d52c716cda4ea473eb26d2c613fe248ecf434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://74a99588daa785243f933c28a11ac41cbdab81eb6fa291c097f633a57d2e90df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde72610
9a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c060a49b2c9f19914a87af1d9e1860fcf614171795c22fcd1fb629d84d3df5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.357218 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.383902 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.423345 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-77v2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db003609-47aa-4a6a-a7ec-6dbc03ded29a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qwlp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-77v2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.435293 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.435358 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.435375 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:33 
crc kubenswrapper[5110]: I0126 00:10:33.435399 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.435415 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:33Z","lastTransitionTime":"2026-01-26T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.446208 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11dce8d-124f-497f-96a2-11dd1dddd26d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mlqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4mlqb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-qgzzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.459978 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8ndzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"040d3d5f-c02a-4a70-92af-70700fd9e3c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zsxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zsxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8ndzr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.486575 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd7963f-3ea7-4581-8c42-0bd2de1e5540\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e90c7cb0f7f52fbb22c7c05dda374c6044ece3a6eb021c481437ea8f5de1298\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:08Z\\\"}},\\\"use
r\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2277f081ca24900f20ca43d5a74890c20bee6e3ec8ad9e4309f66cf96678660e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5241883a28ef3e6af36e6b5a5cca557f4bdb4444cdb5ba3cd00b1e9f28243d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259
fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0a980f89ffe1688954fbe60c16faeaf282be70e3d5a3968f74311a125158488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"contain
erID\\\":\\\"cri-o://62c4e498d0857df9b3c9615022c95ca7ce1497c476eb255d3275604912cf63f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9556ace6d728cde3159c1c8e97e12787903a3279cba1610d592fe244eff5ae5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://9556ace6d728cde3159c1c8e97e12787903a3279cba1610d592fe244eff5ae5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8339480b8ee693e38bc6660f0b37482ff26d091553c39db9cf75f4f8b95f96fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8339480b8ee693e38bc6660f0b37482ff26d091553c39db9cf75f4f8b95f96fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://fb3dc595d8f46c045ba87b7515cb2d4576bb2237750740db45c461f69d392c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb3dc595d8f46c045ba87b7515cb2d4576bb2237750740db45c461f69d392c7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.496711 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafd472-3344-4a8b-a4ef-5242709c94d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86dfaec2435d8c668da9a0ae44fdc2a617089e161cd08dcdf2559f662c8f2b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c2eb152f744e6c7d629e5eb8590f1c2d570d8f370e9deede5c4436d336c754bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3734fa9576160d905d72573d1984bf03448ada8b30a78ff574ce89cb894632d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://46fb6b4c2b5179db0acdc17fca0fe367346d5e4b61798c9054f4af190e4a8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46fb6b4c2b5179db0acdc17fca0fe367346d5e4b61798c9054f4af190e4a8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.505524 5110 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.556318 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.556387 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.556406 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.556386 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-6wqjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d78e4-c2f4-47e1-ae36-612264d9d70c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdlbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6wqjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.556430 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.556628 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:33Z","lastTransitionTime":"2026-01-26T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.564406 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f15bed73-d669-439f-9828-7b952d9bfe65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngjmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngjmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6tpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.573643 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.581223 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p427g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-48xmx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p427g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.595939 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2cba3eb-9a27-49a0-a3e6-645a8853c027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kp4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bnkth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.605639 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-jh4hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h6rnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jh4hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.618003 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf35b8e-a6c5-479a-860f-db0308fb993b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a67e63f2b2d217c10ec50b836c9cca85c134aef230c27dbb1ae8139a6dddd72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://edc58dbb97e7a54abf71095739d515edea8205bf2c5e4008b22a69d40b83cb2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://44222dc6a7066064989d8823f6c0f556f4d4d8cac2f2b4ab1462cbb51db8ff07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3202209affc7d7c4869fa3cbbd7b3a166dee850e7789b407826358743c13e508\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3202209affc7d7c4869fa3cbbd7b3a166dee850e7789b407826358743c13e508\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:09Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0126 00:10:09.419281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:09.419514 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:09.420601 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1392405372/tls.crt::/tmp/serving-cert-1392405372/tls.key\\\\\\\"\\\\nI0126 00:10:09.740266 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:09.742210 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:09.742231 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:09.742265 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:09.742270 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI0126 00:10:09.746497 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:10:09.746538 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:09.746544 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:09.746549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:09.746551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:09.746556 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:09.746559 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 00:10:09.746563 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 00:10:09.749601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fc0c2347169345d3886bfc369c472fa0e2cf381c3b596a46b7de48d84e955504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2fe6116a5b9da02a5b57aaf57f7bed21247934453d936503dd835d1eac1c23b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe6116a5b9da02a5b57aaf57f7bed21247934453d936503dd835d1eac1c23b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.628645 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.636833 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.659728 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.659771 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.659782 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.659814 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.659828 5110 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:33Z","lastTransitionTime":"2026-01-26T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.761932 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.762069 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.762106 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.762123 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.762133 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:33Z","lastTransitionTime":"2026-01-26T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.865105 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.865188 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.865207 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.865234 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.865271 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:33Z","lastTransitionTime":"2026-01-26T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.968206 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.968251 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.968263 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.968281 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:33 crc kubenswrapper[5110]: I0126 00:10:33.968293 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:33Z","lastTransitionTime":"2026-01-26T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.061134 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.061206 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.061245 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.061305 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:34 crc kubenswrapper[5110]: E0126 00:10:34.061431 5110 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:34 crc kubenswrapper[5110]: E0126 00:10:34.061523 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:34 crc kubenswrapper[5110]: E0126 00:10:34.061543 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:34 crc kubenswrapper[5110]: E0126 00:10:34.061555 5110 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:34 crc kubenswrapper[5110]: E0126 00:10:34.061611 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.061576077 +0000 UTC m=+99.290474676 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:34 crc kubenswrapper[5110]: E0126 00:10:34.061651 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.061632208 +0000 UTC m=+99.290530817 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:34 crc kubenswrapper[5110]: E0126 00:10:34.061652 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:34 crc kubenswrapper[5110]: E0126 00:10:34.061679 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:34 crc kubenswrapper[5110]: E0126 00:10:34.061691 5110 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:34 crc kubenswrapper[5110]: E0126 00:10:34.061740 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.061717431 +0000 UTC m=+99.290616060 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 00:10:34 crc kubenswrapper[5110]: E0126 00:10:34.061450 5110 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 00:10:34 crc kubenswrapper[5110]: E0126 00:10:34.061815 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.061781372 +0000 UTC m=+99.290680181 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.070820 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.070894 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.070911 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.070939 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.070956 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:34Z","lastTransitionTime":"2026-01-26T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.173381 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.173478 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.173525 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.173568 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.173609 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:34Z","lastTransitionTime":"2026-01-26T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.263205 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:10:34 crc kubenswrapper[5110]: E0126 00:10:34.263488 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.26345467 +0000 UTC m=+99.492353319 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.276925 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.276996 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.277022 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.277053 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.277078 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:34Z","lastTransitionTime":"2026-01-26T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.316952 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.317028 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.317100 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr"
Jan 26 00:10:34 crc kubenswrapper[5110]: E0126 00:10:34.317157 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 26 00:10:34 crc kubenswrapper[5110]: E0126 00:10:34.317332 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.317465 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 26 00:10:34 crc kubenswrapper[5110]: E0126 00:10:34.317459 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3"
Jan 26 00:10:34 crc kubenswrapper[5110]: E0126 00:10:34.317632 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.364681 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs\") pod \"network-metrics-daemon-8ndzr\" (UID: \"040d3d5f-c02a-4a70-92af-70700fd9e3c3\") " pod="openshift-multus/network-metrics-daemon-8ndzr"
Jan 26 00:10:34 crc kubenswrapper[5110]: E0126 00:10:34.364960 5110 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 00:10:34 crc kubenswrapper[5110]: E0126 00:10:34.365044 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs podName:040d3d5f-c02a-4a70-92af-70700fd9e3c3 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:42.365020299 +0000 UTC m=+99.593918928 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs") pod "network-metrics-daemon-8ndzr" (UID: "040d3d5f-c02a-4a70-92af-70700fd9e3c3") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.379873 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.379961 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.379998 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.380033 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:10:34 crc kubenswrapper[5110]: I0126 00:10:34.380056 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:34Z","lastTransitionTime":"2026-01-26T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.317097 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.317168 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.317181 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr"
Jan 26 00:10:36 crc kubenswrapper[5110]: E0126 00:10:36.317303 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.317438 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 26 00:10:36 crc kubenswrapper[5110]: E0126 00:10:36.317644 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 26 00:10:36 crc kubenswrapper[5110]: E0126 00:10:36.317898 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3" Jan 26 00:10:36 crc kubenswrapper[5110]: E0126 00:10:36.318041 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.339297 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.339632 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.339730 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.339885 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.340009 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:36Z","lastTransitionTime":"2026-01-26T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.442779 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.443131 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.443215 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.443303 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.443385 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:36Z","lastTransitionTime":"2026-01-26T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.545871 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.546168 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.546407 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.546483 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.546549 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:36Z","lastTransitionTime":"2026-01-26T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.648979 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.649031 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.649044 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.649065 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.649079 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:36Z","lastTransitionTime":"2026-01-26T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.752257 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.752578 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.752690 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.752786 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.752953 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:36Z","lastTransitionTime":"2026-01-26T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.855414 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.855680 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.855750 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.855832 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.855890 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:36Z","lastTransitionTime":"2026-01-26T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.958274 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.958332 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.958343 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.958364 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:36 crc kubenswrapper[5110]: I0126 00:10:36.958375 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:36Z","lastTransitionTime":"2026-01-26T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.061169 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.061213 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.061224 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.061240 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.061250 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:37Z","lastTransitionTime":"2026-01-26T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.164371 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.164445 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.164456 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.164476 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.164506 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:37Z","lastTransitionTime":"2026-01-26T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.266582 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.266634 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.266647 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.266666 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.266680 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:37Z","lastTransitionTime":"2026-01-26T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.368634 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.368676 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.368684 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.368700 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.368710 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:37Z","lastTransitionTime":"2026-01-26T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.471422 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.471488 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.471507 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.471532 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.471546 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:37Z","lastTransitionTime":"2026-01-26T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.574416 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.574475 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.574488 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.574509 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.574522 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:37Z","lastTransitionTime":"2026-01-26T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.677271 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.677916 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.677936 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.677955 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.677968 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:37Z","lastTransitionTime":"2026-01-26T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.780463 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.780884 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.781174 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.781384 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.781586 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:37Z","lastTransitionTime":"2026-01-26T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.884109 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.884214 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.884245 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.884287 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.884354 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:37Z","lastTransitionTime":"2026-01-26T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.987455 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.987520 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.987534 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.987556 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:37 crc kubenswrapper[5110]: I0126 00:10:37.987571 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:37Z","lastTransitionTime":"2026-01-26T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.089830 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.090034 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.090131 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.090277 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.090395 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:38Z","lastTransitionTime":"2026-01-26T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.193571 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.193656 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.193678 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.193714 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.193736 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:38Z","lastTransitionTime":"2026-01-26T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.296749 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.296830 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.296848 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.296869 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.296885 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:38Z","lastTransitionTime":"2026-01-26T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.316630 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.316864 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.317177 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:38 crc kubenswrapper[5110]: E0126 00:10:38.317868 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.318069 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.318241 5110 scope.go:117] "RemoveContainer" containerID="3202209affc7d7c4869fa3cbbd7b3a166dee850e7789b407826358743c13e508" Jan 26 00:10:38 crc kubenswrapper[5110]: E0126 00:10:38.318282 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:38 crc kubenswrapper[5110]: E0126 00:10:38.318540 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:10:38 crc kubenswrapper[5110]: E0126 00:10:38.318552 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:38 crc kubenswrapper[5110]: E0126 00:10:38.318748 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.400404 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.400471 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.400491 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.400513 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.400526 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:38Z","lastTransitionTime":"2026-01-26T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.503482 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.503538 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.503548 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.503571 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.503588 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:38Z","lastTransitionTime":"2026-01-26T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.607052 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.607140 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.607152 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.607173 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.607187 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:38Z","lastTransitionTime":"2026-01-26T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.709721 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.709771 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.709787 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.709824 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.709837 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:38Z","lastTransitionTime":"2026-01-26T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.724739 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"15cd76855fa2ad9102c4d7f0f05229a36fe9c0124d99fb0cdaca585f7b8f4916"} Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.724829 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"2a461a8f807dadea6655a81030123e41967f0bc0421307c629a2fc800eaa64de"} Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.726376 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" event={"ID":"f11dce8d-124f-497f-96a2-11dd1dddd26d","Type":"ContainerStarted","Data":"4194f5c9379f05c335d58c418cfd83b43644db004cb9b4c24d7d124312a94844"} Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.812823 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.813172 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.813192 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.813212 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.813224 5110 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:38Z","lastTransitionTime":"2026-01-26T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.899745 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=12.899709845 podStartE2EDuration="12.899709845s" podCreationTimestamp="2026-01-26 00:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:38.898966874 +0000 UTC m=+96.127865483" watchObservedRunningTime="2026-01-26 00:10:38.899709845 +0000 UTC m=+96.128608494" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.915731 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.915866 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.915882 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.915904 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.915917 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:38Z","lastTransitionTime":"2026-01-26T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:38 crc kubenswrapper[5110]: I0126 00:10:38.918325 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=12.91831029 podStartE2EDuration="12.91831029s" podCreationTimestamp="2026-01-26 00:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:38.918212757 +0000 UTC m=+96.147111366" watchObservedRunningTime="2026-01-26 00:10:38.91831029 +0000 UTC m=+96.147208899" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.018388 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.018446 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.018456 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.018472 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.018483 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.035354 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=13.035332426 podStartE2EDuration="13.035332426s" podCreationTimestamp="2026-01-26 00:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:39.034704508 +0000 UTC m=+96.263603127" watchObservedRunningTime="2026-01-26 00:10:39.035332426 +0000 UTC m=+96.264231035" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.051272 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=13.051227575 podStartE2EDuration="13.051227575s" podCreationTimestamp="2026-01-26 00:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:39.050943497 +0000 UTC m=+96.279842156" watchObservedRunningTime="2026-01-26 00:10:39.051227575 +0000 UTC m=+96.280126184" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.120488 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.120548 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.120561 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.120581 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.120608 5110 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.222968 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.223042 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.223055 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.223077 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.223091 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.324993 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.325066 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.325088 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.325112 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.325131 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.428889 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.428958 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.428972 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.428994 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.429007 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.532300 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.532352 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.532363 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.532386 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.532398 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.634830 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.634878 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.634888 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.634906 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.634917 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.739383 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.739469 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.739485 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.739506 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.739544 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.818294 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" event={"ID":"f11dce8d-124f-497f-96a2-11dd1dddd26d","Type":"ContainerStarted","Data":"ebe5a5f53b62a812a3030d77b983eb270e4cd2147fb0a3329f9013f86244cffe"} Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.890011 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.890066 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.890082 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.890104 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.890117 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.993922 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.993964 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.993975 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.993993 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5110]: I0126 00:10:39.994006 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.096176 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.096228 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.096240 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.096259 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.096273 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:40Z","lastTransitionTime":"2026-01-26T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.234503 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.234560 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.234572 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.234598 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.234612 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:40Z","lastTransitionTime":"2026-01-26T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.321999 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:40 crc kubenswrapper[5110]: E0126 00:10:40.322186 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.322642 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:40 crc kubenswrapper[5110]: E0126 00:10:40.322704 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.322760 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:10:40 crc kubenswrapper[5110]: E0126 00:10:40.322835 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.322889 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:40 crc kubenswrapper[5110]: E0126 00:10:40.322932 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.337387 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.337429 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.337437 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.337453 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.337465 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:40Z","lastTransitionTime":"2026-01-26T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.440128 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.440194 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.440209 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.440230 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.440244 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:40Z","lastTransitionTime":"2026-01-26T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.543433 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.543509 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.543526 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.543553 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.543569 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:40Z","lastTransitionTime":"2026-01-26T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.646138 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.646200 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.646212 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.646228 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.646240 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:40Z","lastTransitionTime":"2026-01-26T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.748975 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.749044 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.749057 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.749077 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.749092 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:40Z","lastTransitionTime":"2026-01-26T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.822691 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6wqjg" event={"ID":"cf1d78e4-c2f4-47e1-ae36-612264d9d70c","Type":"ContainerStarted","Data":"bb0ad720de603a5dcbb42eec47ecf0d0e81a6966a9e7e11f64480952bb20f716"} Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.825288 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"5d00508e59f28518c7c0aca1cd1f3497b03495e32bd7f3ada27a095f555e0fdb"} Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.829070 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77v2r" event={"ID":"db003609-47aa-4a6a-a7ec-6dbc03ded29a","Type":"ContainerStarted","Data":"e47d57ec54e986738277f2cee394a42fb2a28345059a90bff0b4b4bae61d1460"} Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.842553 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" podStartSLOduration=77.84252552 podStartE2EDuration="1m17.84252552s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:39.841071528 +0000 UTC m=+97.069970137" watchObservedRunningTime="2026-01-26 00:10:40.84252552 +0000 UTC m=+98.071424119" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.843564 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-6wqjg" podStartSLOduration=77.843552119 podStartE2EDuration="1m17.843552119s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-26 00:10:40.841852161 +0000 UTC m=+98.070750780" watchObservedRunningTime="2026-01-26 00:10:40.843552119 +0000 UTC m=+98.072450718" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.852518 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.852597 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.852609 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.852627 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.852639 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:40Z","lastTransitionTime":"2026-01-26T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.955355 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.955434 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.955446 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.955466 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:40 crc kubenswrapper[5110]: I0126 00:10:40.955482 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:40Z","lastTransitionTime":"2026-01-26T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.058466 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.058520 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.058531 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.058552 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.058563 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:41Z","lastTransitionTime":"2026-01-26T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.161579 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.161643 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.161654 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.161675 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.161688 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:41Z","lastTransitionTime":"2026-01-26T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.264478 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.264533 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.264549 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.264569 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.264581 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:41Z","lastTransitionTime":"2026-01-26T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.367196 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.367258 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.367271 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.367290 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.367305 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:41Z","lastTransitionTime":"2026-01-26T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.469867 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.469943 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.469957 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.469978 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.469994 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:41Z","lastTransitionTime":"2026-01-26T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.572457 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.572531 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.572553 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.572580 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.572602 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:41Z","lastTransitionTime":"2026-01-26T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.675072 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.675157 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.675170 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.675191 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.675207 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:41Z","lastTransitionTime":"2026-01-26T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.777983 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.778084 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.778107 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.778134 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.778154 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:41Z","lastTransitionTime":"2026-01-26T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.835720 5110 generic.go:358] "Generic (PLEG): container finished" podID="db003609-47aa-4a6a-a7ec-6dbc03ded29a" containerID="e47d57ec54e986738277f2cee394a42fb2a28345059a90bff0b4b4bae61d1460" exitCode=0 Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.835784 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77v2r" event={"ID":"db003609-47aa-4a6a-a7ec-6dbc03ded29a","Type":"ContainerDied","Data":"e47d57ec54e986738277f2cee394a42fb2a28345059a90bff0b4b4bae61d1460"} Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.881750 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.881855 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.881877 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.881924 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.881941 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:41Z","lastTransitionTime":"2026-01-26T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.988993 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.989057 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.989069 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.989089 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:41 crc kubenswrapper[5110]: I0126 00:10:41.989103 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:41Z","lastTransitionTime":"2026-01-26T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.042699 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.042756 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.042767 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.042785 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.042816 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:42Z","lastTransitionTime":"2026-01-26T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.087613 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.087671 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.087693 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.087727 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:42 crc kubenswrapper[5110]: E0126 00:10:42.087861 5110 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:42 crc kubenswrapper[5110]: E0126 00:10:42.087919 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:58.087902922 +0000 UTC m=+115.316801531 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:42 crc kubenswrapper[5110]: E0126 00:10:42.088223 5110 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:42 crc kubenswrapper[5110]: E0126 00:10:42.088261 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:58.088252912 +0000 UTC m=+115.317151521 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:42 crc kubenswrapper[5110]: E0126 00:10:42.088310 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:42 crc kubenswrapper[5110]: E0126 00:10:42.088321 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:42 crc kubenswrapper[5110]: E0126 00:10:42.088330 5110 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:42 crc kubenswrapper[5110]: E0126 00:10:42.088357 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:58.088349465 +0000 UTC m=+115.317248074 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:42 crc kubenswrapper[5110]: E0126 00:10:42.088395 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:42 crc kubenswrapper[5110]: E0126 00:10:42.088402 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:42 crc kubenswrapper[5110]: E0126 00:10:42.088409 5110 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:42 crc kubenswrapper[5110]: E0126 00:10:42.088428 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:58.088422657 +0000 UTC m=+115.317321266 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.090537 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bgrv"] Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.246818 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bgrv" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.249356 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.249606 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.249767 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.249926 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.281077 5110 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.288691 5110 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" 
reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.289234 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:42 crc kubenswrapper[5110]: E0126 00:10:42.289372 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:58.289340143 +0000 UTC m=+115.518238752 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.320126 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.320161 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:10:42 crc kubenswrapper[5110]: E0126 00:10:42.320302 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.321020 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.321586 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:42 crc kubenswrapper[5110]: E0126 00:10:42.321636 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3" Jan 26 00:10:42 crc kubenswrapper[5110]: E0126 00:10:42.321680 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:42 crc kubenswrapper[5110]: E0126 00:10:42.322555 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.391303 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-7bgrv\" (UID: \"0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bgrv" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.391367 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-7bgrv\" (UID: \"0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bgrv" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.391397 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-7bgrv\" (UID: \"0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bgrv" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 
00:10:42.391434 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs\") pod \"network-metrics-daemon-8ndzr\" (UID: \"040d3d5f-c02a-4a70-92af-70700fd9e3c3\") " pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.391488 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-7bgrv\" (UID: \"0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bgrv" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.391508 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-7bgrv\" (UID: \"0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bgrv" Jan 26 00:10:42 crc kubenswrapper[5110]: E0126 00:10:42.391688 5110 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:42 crc kubenswrapper[5110]: E0126 00:10:42.391762 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs podName:040d3d5f-c02a-4a70-92af-70700fd9e3c3 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:58.391736996 +0000 UTC m=+115.620635615 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs") pod "network-metrics-daemon-8ndzr" (UID: "040d3d5f-c02a-4a70-92af-70700fd9e3c3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.492947 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-7bgrv\" (UID: \"0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bgrv" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.493442 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-7bgrv\" (UID: \"0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bgrv" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.493606 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-7bgrv\" (UID: \"0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bgrv" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.493681 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-7bgrv\" (UID: \"0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89\") " 
pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bgrv" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.493829 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-7bgrv\" (UID: \"0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bgrv" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.493238 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-7bgrv\" (UID: \"0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bgrv" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.494962 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-7bgrv\" (UID: \"0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bgrv" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.496130 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-7bgrv\" (UID: \"0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bgrv" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.502211 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-7bgrv\" (UID: \"0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bgrv" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.515901 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-7bgrv\" (UID: \"0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bgrv" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.559332 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bgrv" Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.841629 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77v2r" event={"ID":"db003609-47aa-4a6a-a7ec-6dbc03ded29a","Type":"ContainerStarted","Data":"dc3101dd3c66431e1bf6dafcda59008b81eafdd7d4e0aed5a2018955582f119e"} Jan 26 00:10:42 crc kubenswrapper[5110]: I0126 00:10:42.842675 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bgrv" event={"ID":"0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89","Type":"ContainerStarted","Data":"288273e1da790fda65a957316523819be7b51a57f8519c755289119ef6a082dd"} Jan 26 00:10:43 crc kubenswrapper[5110]: I0126 00:10:43.876218 5110 generic.go:358] "Generic (PLEG): container finished" podID="db003609-47aa-4a6a-a7ec-6dbc03ded29a" containerID="dc3101dd3c66431e1bf6dafcda59008b81eafdd7d4e0aed5a2018955582f119e" exitCode=0 Jan 26 00:10:43 crc kubenswrapper[5110]: I0126 00:10:43.876506 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77v2r" 
event={"ID":"db003609-47aa-4a6a-a7ec-6dbc03ded29a","Type":"ContainerDied","Data":"dc3101dd3c66431e1bf6dafcda59008b81eafdd7d4e0aed5a2018955582f119e"} Jan 26 00:10:43 crc kubenswrapper[5110]: I0126 00:10:43.881198 5110 generic.go:358] "Generic (PLEG): container finished" podID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerID="0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73" exitCode=0 Jan 26 00:10:43 crc kubenswrapper[5110]: I0126 00:10:43.881350 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" event={"ID":"c2cba3eb-9a27-49a0-a3e6-645a8853c027","Type":"ContainerDied","Data":"0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73"} Jan 26 00:10:43 crc kubenswrapper[5110]: I0126 00:10:43.886244 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jh4hk" event={"ID":"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec","Type":"ContainerStarted","Data":"d02f14fb13f68157b9c184d3d2944b69d7c9279922e8dd561e7634655b83d0bd"} Jan 26 00:10:43 crc kubenswrapper[5110]: I0126 00:10:43.891258 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" event={"ID":"f15bed73-d669-439f-9828-7b952d9bfe65","Type":"ContainerStarted","Data":"8d9183336fcd7f82da30e7c47ac4c55b5870284a52f6c97a6046736bea666e8f"} Jan 26 00:10:44 crc kubenswrapper[5110]: I0126 00:10:44.316528 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:44 crc kubenswrapper[5110]: E0126 00:10:44.316717 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:44 crc kubenswrapper[5110]: I0126 00:10:44.317307 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:10:44 crc kubenswrapper[5110]: E0126 00:10:44.317396 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3" Jan 26 00:10:44 crc kubenswrapper[5110]: I0126 00:10:44.317473 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:44 crc kubenswrapper[5110]: E0126 00:10:44.317539 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:44 crc kubenswrapper[5110]: I0126 00:10:44.317610 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:44 crc kubenswrapper[5110]: E0126 00:10:44.317677 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:44 crc kubenswrapper[5110]: I0126 00:10:44.897422 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bgrv" event={"ID":"0ac8041a-d3b8-4b54-b6a2-d62d44e4aa89","Type":"ContainerStarted","Data":"de9e2f6daac62df859ccfa3bddecc18641afdac1c5c9d326399392a6a27b7820"} Jan 26 00:10:44 crc kubenswrapper[5110]: I0126 00:10:44.909546 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-p427g" event={"ID":"b1bb7325-6bd2-4a72-aa8b-79cc2e3d821b","Type":"ContainerStarted","Data":"993c2550d2af4ea20814f6debd6a7a09a172d13d576225fc50bafd89fb1059dd"} Jan 26 00:10:44 crc kubenswrapper[5110]: I0126 00:10:44.938994 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7bgrv" podStartSLOduration=81.938966767 podStartE2EDuration="1m21.938966767s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:44.937744932 +0000 UTC m=+102.166643541" watchObservedRunningTime="2026-01-26 00:10:44.938966767 +0000 UTC m=+102.167865376" Jan 26 00:10:44 crc kubenswrapper[5110]: I0126 00:10:44.939189 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-jh4hk" podStartSLOduration=81.939184423 podStartE2EDuration="1m21.939184423s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:44.045713452 +0000 UTC m=+101.274612071" watchObservedRunningTime="2026-01-26 00:10:44.939184423 +0000 UTC m=+102.168083032" Jan 26 00:10:44 crc 
kubenswrapper[5110]: I0126 00:10:44.939854 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" event={"ID":"f15bed73-d669-439f-9828-7b952d9bfe65","Type":"ContainerStarted","Data":"269a153239e8d4e7484b70b00106d196ab1d79e23f021c752f602f6bbce05ee2"} Jan 26 00:10:44 crc kubenswrapper[5110]: I0126 00:10:44.945686 5110 generic.go:358] "Generic (PLEG): container finished" podID="db003609-47aa-4a6a-a7ec-6dbc03ded29a" containerID="f0c957611058587b58f209f53b6124321527570c8b1a82acd6b703427833ed02" exitCode=0 Jan 26 00:10:44 crc kubenswrapper[5110]: I0126 00:10:44.945827 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77v2r" event={"ID":"db003609-47aa-4a6a-a7ec-6dbc03ded29a","Type":"ContainerDied","Data":"f0c957611058587b58f209f53b6124321527570c8b1a82acd6b703427833ed02"} Jan 26 00:10:44 crc kubenswrapper[5110]: I0126 00:10:44.949962 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" event={"ID":"c2cba3eb-9a27-49a0-a3e6-645a8853c027","Type":"ContainerStarted","Data":"09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e"} Jan 26 00:10:44 crc kubenswrapper[5110]: I0126 00:10:44.949993 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" event={"ID":"c2cba3eb-9a27-49a0-a3e6-645a8853c027","Type":"ContainerStarted","Data":"fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b"} Jan 26 00:10:44 crc kubenswrapper[5110]: I0126 00:10:44.982727 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-p427g" podStartSLOduration=81.982697682 podStartE2EDuration="1m21.982697682s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:44.956851522 
+0000 UTC m=+102.185750131" watchObservedRunningTime="2026-01-26 00:10:44.982697682 +0000 UTC m=+102.211596291" Jan 26 00:10:46 crc kubenswrapper[5110]: I0126 00:10:46.095292 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77v2r" event={"ID":"db003609-47aa-4a6a-a7ec-6dbc03ded29a","Type":"ContainerStarted","Data":"bd3f7e17a6a3e6ab13a01a47a04a92ec44ede49a46654d71194cbe9634034511"} Jan 26 00:10:46 crc kubenswrapper[5110]: I0126 00:10:46.105697 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" event={"ID":"c2cba3eb-9a27-49a0-a3e6-645a8853c027","Type":"ContainerStarted","Data":"acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9"} Jan 26 00:10:46 crc kubenswrapper[5110]: I0126 00:10:46.105778 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" event={"ID":"c2cba3eb-9a27-49a0-a3e6-645a8853c027","Type":"ContainerStarted","Data":"f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0"} Jan 26 00:10:46 crc kubenswrapper[5110]: I0126 00:10:46.105790 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" event={"ID":"c2cba3eb-9a27-49a0-a3e6-645a8853c027","Type":"ContainerStarted","Data":"c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218"} Jan 26 00:10:46 crc kubenswrapper[5110]: I0126 00:10:46.127939 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podStartSLOduration=83.127912544 podStartE2EDuration="1m23.127912544s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:44.99501894 +0000 UTC m=+102.223917559" watchObservedRunningTime="2026-01-26 00:10:46.127912544 +0000 UTC m=+103.356811153" Jan 
26 00:10:46 crc kubenswrapper[5110]: I0126 00:10:46.316571 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:46 crc kubenswrapper[5110]: I0126 00:10:46.316635 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:10:46 crc kubenswrapper[5110]: I0126 00:10:46.316743 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:46 crc kubenswrapper[5110]: E0126 00:10:46.316768 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:46 crc kubenswrapper[5110]: E0126 00:10:46.316846 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3" Jan 26 00:10:46 crc kubenswrapper[5110]: E0126 00:10:46.317002 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:46 crc kubenswrapper[5110]: I0126 00:10:46.317082 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:46 crc kubenswrapper[5110]: E0126 00:10:46.317199 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:47 crc kubenswrapper[5110]: I0126 00:10:47.112142 5110 generic.go:358] "Generic (PLEG): container finished" podID="db003609-47aa-4a6a-a7ec-6dbc03ded29a" containerID="bd3f7e17a6a3e6ab13a01a47a04a92ec44ede49a46654d71194cbe9634034511" exitCode=0 Jan 26 00:10:47 crc kubenswrapper[5110]: I0126 00:10:47.112219 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77v2r" event={"ID":"db003609-47aa-4a6a-a7ec-6dbc03ded29a","Type":"ContainerDied","Data":"bd3f7e17a6a3e6ab13a01a47a04a92ec44ede49a46654d71194cbe9634034511"} Jan 26 00:10:47 crc kubenswrapper[5110]: I0126 00:10:47.115689 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"11fc578dfa1dc20abf48ad3e1625df601a256c1201b26b7f280b70b5078bdd5c"} Jan 26 00:10:48 crc kubenswrapper[5110]: I0126 00:10:48.126286 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77v2r" 
event={"ID":"db003609-47aa-4a6a-a7ec-6dbc03ded29a","Type":"ContainerStarted","Data":"3948c98ef0be68c1658b791e60a4b5ac7a304ec2f1e8e0acc7f6508df636f09f"} Jan 26 00:10:48 crc kubenswrapper[5110]: I0126 00:10:48.133244 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" event={"ID":"c2cba3eb-9a27-49a0-a3e6-645a8853c027","Type":"ContainerStarted","Data":"181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26"} Jan 26 00:10:48 crc kubenswrapper[5110]: I0126 00:10:48.316783 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:10:48 crc kubenswrapper[5110]: I0126 00:10:48.316849 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:48 crc kubenswrapper[5110]: E0126 00:10:48.317120 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3" Jan 26 00:10:48 crc kubenswrapper[5110]: E0126 00:10:48.317184 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:48 crc kubenswrapper[5110]: I0126 00:10:48.317226 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:48 crc kubenswrapper[5110]: E0126 00:10:48.317391 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:48 crc kubenswrapper[5110]: I0126 00:10:48.317418 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:48 crc kubenswrapper[5110]: E0126 00:10:48.317541 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:50 crc kubenswrapper[5110]: I0126 00:10:50.146977 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" event={"ID":"c2cba3eb-9a27-49a0-a3e6-645a8853c027","Type":"ContainerStarted","Data":"828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8"} Jan 26 00:10:50 crc kubenswrapper[5110]: I0126 00:10:50.316462 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:10:50 crc kubenswrapper[5110]: I0126 00:10:50.316530 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:50 crc kubenswrapper[5110]: I0126 00:10:50.316551 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:50 crc kubenswrapper[5110]: E0126 00:10:50.317283 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:50 crc kubenswrapper[5110]: I0126 00:10:50.316662 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:50 crc kubenswrapper[5110]: E0126 00:10:50.317055 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3" Jan 26 00:10:50 crc kubenswrapper[5110]: E0126 00:10:50.317405 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:50 crc kubenswrapper[5110]: E0126 00:10:50.317748 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:52 crc kubenswrapper[5110]: I0126 00:10:52.316571 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:52 crc kubenswrapper[5110]: I0126 00:10:52.316585 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:10:52 crc kubenswrapper[5110]: I0126 00:10:52.316628 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:52 crc kubenswrapper[5110]: I0126 00:10:52.316647 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:52 crc kubenswrapper[5110]: E0126 00:10:52.317378 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3" Jan 26 00:10:52 crc kubenswrapper[5110]: E0126 00:10:52.317709 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:52 crc kubenswrapper[5110]: E0126 00:10:52.317981 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:52 crc kubenswrapper[5110]: I0126 00:10:52.318013 5110 scope.go:117] "RemoveContainer" containerID="3202209affc7d7c4869fa3cbbd7b3a166dee850e7789b407826358743c13e508" Jan 26 00:10:52 crc kubenswrapper[5110]: E0126 00:10:52.318158 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:54 crc kubenswrapper[5110]: I0126 00:10:54.163401 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 26 00:10:54 crc kubenswrapper[5110]: I0126 00:10:54.165738 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"ca0a9c3e32a579de04d542b7d2c66ce1ea575b229700d8fdae6cdb109b6ab46e"} Jan 26 00:10:54 crc kubenswrapper[5110]: I0126 00:10:54.166286 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:54 crc kubenswrapper[5110]: I0126 00:10:54.169117 5110 generic.go:358] "Generic (PLEG): container finished" podID="db003609-47aa-4a6a-a7ec-6dbc03ded29a" containerID="3948c98ef0be68c1658b791e60a4b5ac7a304ec2f1e8e0acc7f6508df636f09f" exitCode=0 Jan 26 00:10:54 crc kubenswrapper[5110]: I0126 00:10:54.169208 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77v2r" event={"ID":"db003609-47aa-4a6a-a7ec-6dbc03ded29a","Type":"ContainerDied","Data":"3948c98ef0be68c1658b791e60a4b5ac7a304ec2f1e8e0acc7f6508df636f09f"} Jan 26 00:10:54 crc kubenswrapper[5110]: I0126 00:10:54.175892 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" event={"ID":"c2cba3eb-9a27-49a0-a3e6-645a8853c027","Type":"ContainerStarted","Data":"ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab"} Jan 26 00:10:54 crc kubenswrapper[5110]: I0126 00:10:54.192219 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=28.192195843 
podStartE2EDuration="28.192195843s" podCreationTimestamp="2026-01-26 00:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:54.191562486 +0000 UTC m=+111.420461095" watchObservedRunningTime="2026-01-26 00:10:54.192195843 +0000 UTC m=+111.421094452"
Jan 26 00:10:54 crc kubenswrapper[5110]: I0126 00:10:54.317133 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr"
Jan 26 00:10:54 crc kubenswrapper[5110]: E0126 00:10:54.317346 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3"
Jan 26 00:10:54 crc kubenswrapper[5110]: I0126 00:10:54.317988 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 26 00:10:54 crc kubenswrapper[5110]: E0126 00:10:54.318093 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 26 00:10:54 crc kubenswrapper[5110]: I0126 00:10:54.318159 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 26 00:10:54 crc kubenswrapper[5110]: E0126 00:10:54.318260 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 26 00:10:54 crc kubenswrapper[5110]: I0126 00:10:54.318301 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 26 00:10:54 crc kubenswrapper[5110]: E0126 00:10:54.318472 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 26 00:10:56 crc kubenswrapper[5110]: I0126 00:10:56.316637 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr"
Jan 26 00:10:56 crc kubenswrapper[5110]: I0126 00:10:56.316679 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 26 00:10:56 crc kubenswrapper[5110]: I0126 00:10:56.316672 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 26 00:10:56 crc kubenswrapper[5110]: E0126 00:10:56.316853 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3"
Jan 26 00:10:56 crc kubenswrapper[5110]: E0126 00:10:56.316986 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 26 00:10:56 crc kubenswrapper[5110]: E0126 00:10:56.317105 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 26 00:10:56 crc kubenswrapper[5110]: I0126 00:10:56.317237 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 26 00:10:56 crc kubenswrapper[5110]: E0126 00:10:56.317494 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 26 00:10:56 crc kubenswrapper[5110]: I0126 00:10:56.863626 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth"
Jan 26 00:10:56 crc kubenswrapper[5110]: I0126 00:10:56.863785 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth"
Jan 26 00:10:56 crc kubenswrapper[5110]: I0126 00:10:56.864590 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth"
Jan 26 00:10:56 crc kubenswrapper[5110]: I0126 00:10:56.900750 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" podStartSLOduration=93.9007083 podStartE2EDuration="1m33.9007083s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:56.899866336 +0000 UTC m=+114.128764965" watchObservedRunningTime="2026-01-26 00:10:56.9007083 +0000 UTC m=+114.129606909"
Jan 26 00:10:58 crc kubenswrapper[5110]: I0126 00:10:58.312598 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:10:58 crc kubenswrapper[5110]: E0126 00:10:58.316198 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:30.316111476 +0000 UTC m=+147.545010135 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:10:58 crc kubenswrapper[5110]: I0126 00:10:58.316463 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 26 00:10:58 crc kubenswrapper[5110]: I0126 00:10:58.316769 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 26 00:10:58 crc kubenswrapper[5110]: I0126 00:10:58.316875 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 26 00:10:58 crc kubenswrapper[5110]: I0126 00:10:58.317252 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 26 00:10:58 crc kubenswrapper[5110]: E0126 00:10:58.318296 5110 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 00:10:58 crc kubenswrapper[5110]: E0126 00:10:58.318397 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:30.318368039 +0000 UTC m=+147.547266658 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 00:10:58 crc kubenswrapper[5110]: E0126 00:10:58.318840 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 00:10:58 crc kubenswrapper[5110]: E0126 00:10:58.318873 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 00:10:58 crc kubenswrapper[5110]: E0126 00:10:58.318890 5110 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 00:10:58 crc kubenswrapper[5110]: E0126 00:10:58.318946 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:30.318928035 +0000 UTC m=+147.547826854 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 00:10:58 crc kubenswrapper[5110]: E0126 00:10:58.318956 5110 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 00:10:58 crc kubenswrapper[5110]: E0126 00:10:58.319067 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:30.319039138 +0000 UTC m=+147.547937787 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 00:10:58 crc kubenswrapper[5110]: E0126 00:10:58.319238 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 00:10:58 crc kubenswrapper[5110]: E0126 00:10:58.319258 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 00:10:58 crc kubenswrapper[5110]: E0126 00:10:58.319270 5110 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 00:10:58 crc kubenswrapper[5110]: E0126 00:10:58.319308 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:30.319296906 +0000 UTC m=+147.548195515 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 00:10:58 crc kubenswrapper[5110]: E0126 00:10:58.323865 5110 kubelet.go:2642] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.007s"
Jan 26 00:10:58 crc kubenswrapper[5110]: I0126 00:10:58.324188 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 26 00:10:58 crc kubenswrapper[5110]: E0126 00:10:58.324398 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 26 00:10:58 crc kubenswrapper[5110]: I0126 00:10:58.324437 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr"
Jan 26 00:10:58 crc kubenswrapper[5110]: E0126 00:10:58.324522 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3"
Jan 26 00:10:58 crc kubenswrapper[5110]: I0126 00:10:58.324570 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 26 00:10:58 crc kubenswrapper[5110]: I0126 00:10:58.324630 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 26 00:10:58 crc kubenswrapper[5110]: E0126 00:10:58.324875 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 26 00:10:58 crc kubenswrapper[5110]: E0126 00:10:58.325067 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 26 00:10:58 crc kubenswrapper[5110]: I0126 00:10:58.326608 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth"
Jan 26 00:10:58 crc kubenswrapper[5110]: I0126 00:10:58.331618 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth"
Jan 26 00:10:58 crc kubenswrapper[5110]: I0126 00:10:58.359173 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="ovnkube-controller" probeResult="failure" output=""
Jan 26 00:10:58 crc kubenswrapper[5110]: I0126 00:10:58.418031 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs\") pod \"network-metrics-daemon-8ndzr\" (UID: \"040d3d5f-c02a-4a70-92af-70700fd9e3c3\") " pod="openshift-multus/network-metrics-daemon-8ndzr"
Jan 26 00:10:58 crc kubenswrapper[5110]: E0126 00:10:58.418944 5110 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 00:10:58 crc kubenswrapper[5110]: E0126 00:10:58.419020 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs podName:040d3d5f-c02a-4a70-92af-70700fd9e3c3 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:30.418998512 +0000 UTC m=+147.647897121 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs") pod "network-metrics-daemon-8ndzr" (UID: "040d3d5f-c02a-4a70-92af-70700fd9e3c3") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 00:10:59 crc kubenswrapper[5110]: I0126 00:10:59.324697 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77v2r" event={"ID":"db003609-47aa-4a6a-a7ec-6dbc03ded29a","Type":"ContainerStarted","Data":"eb23520aaf8ce12baf8c022463dd70016f58efe514904efd45e82c2b3a9237d1"}
Jan 26 00:10:59 crc kubenswrapper[5110]: I0126 00:10:59.340507 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="ovnkube-controller" probeResult="failure" output=""
Jan 26 00:11:00 crc kubenswrapper[5110]: I0126 00:11:00.317117 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr"
Jan 26 00:11:00 crc kubenswrapper[5110]: I0126 00:11:00.317164 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 26 00:11:00 crc kubenswrapper[5110]: E0126 00:11:00.317317 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3"
Jan 26 00:11:00 crc kubenswrapper[5110]: E0126 00:11:00.317450 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 26 00:11:00 crc kubenswrapper[5110]: I0126 00:11:00.317491 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 26 00:11:00 crc kubenswrapper[5110]: I0126 00:11:00.317553 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 26 00:11:00 crc kubenswrapper[5110]: E0126 00:11:00.317722 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 26 00:11:00 crc kubenswrapper[5110]: E0126 00:11:00.317922 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 26 00:11:01 crc kubenswrapper[5110]: I0126 00:11:01.337933 5110 generic.go:358] "Generic (PLEG): container finished" podID="db003609-47aa-4a6a-a7ec-6dbc03ded29a" containerID="eb23520aaf8ce12baf8c022463dd70016f58efe514904efd45e82c2b3a9237d1" exitCode=0
Jan 26 00:11:01 crc kubenswrapper[5110]: I0126 00:11:01.338026 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77v2r" event={"ID":"db003609-47aa-4a6a-a7ec-6dbc03ded29a","Type":"ContainerDied","Data":"eb23520aaf8ce12baf8c022463dd70016f58efe514904efd45e82c2b3a9237d1"}
Jan 26 00:11:02 crc kubenswrapper[5110]: I0126 00:11:02.316453 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 26 00:11:02 crc kubenswrapper[5110]: I0126 00:11:02.316528 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 26 00:11:02 crc kubenswrapper[5110]: E0126 00:11:02.316649 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 26 00:11:02 crc kubenswrapper[5110]: I0126 00:11:02.316469 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 26 00:11:02 crc kubenswrapper[5110]: E0126 00:11:02.316853 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 26 00:11:02 crc kubenswrapper[5110]: E0126 00:11:02.316956 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 26 00:11:02 crc kubenswrapper[5110]: I0126 00:11:02.317101 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr"
Jan 26 00:11:02 crc kubenswrapper[5110]: E0126 00:11:02.317492 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3"
Jan 26 00:11:03 crc kubenswrapper[5110]: E0126 00:11:03.266626 5110 kubelet_node_status.go:509] "Node not becoming ready in time after startup"
Jan 26 00:11:03 crc kubenswrapper[5110]: E0126 00:11:03.474855 5110 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 26 00:11:04 crc kubenswrapper[5110]: I0126 00:11:04.316673 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 26 00:11:04 crc kubenswrapper[5110]: I0126 00:11:04.316741 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 26 00:11:04 crc kubenswrapper[5110]: I0126 00:11:04.316899 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 26 00:11:04 crc kubenswrapper[5110]: I0126 00:11:04.316908 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr"
Jan 26 00:11:04 crc kubenswrapper[5110]: E0126 00:11:04.317163 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 26 00:11:04 crc kubenswrapper[5110]: E0126 00:11:04.317252 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 26 00:11:04 crc kubenswrapper[5110]: E0126 00:11:04.317383 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3"
Jan 26 00:11:04 crc kubenswrapper[5110]: E0126 00:11:04.317501 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 26 00:11:06 crc kubenswrapper[5110]: I0126 00:11:06.316267 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 26 00:11:06 crc kubenswrapper[5110]: I0126 00:11:06.316339 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 26 00:11:06 crc kubenswrapper[5110]: E0126 00:11:06.317077 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 26 00:11:06 crc kubenswrapper[5110]: I0126 00:11:06.316409 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr"
Jan 26 00:11:06 crc kubenswrapper[5110]: I0126 00:11:06.316387 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 26 00:11:06 crc kubenswrapper[5110]: E0126 00:11:06.317177 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 26 00:11:06 crc kubenswrapper[5110]: E0126 00:11:06.317195 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3"
Jan 26 00:11:06 crc kubenswrapper[5110]: E0126 00:11:06.317524 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 26 00:11:06 crc kubenswrapper[5110]: I0126 00:11:06.870394 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:11:08 crc kubenswrapper[5110]: I0126 00:11:08.316392 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 26 00:11:08 crc kubenswrapper[5110]: I0126 00:11:08.316446 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr"
Jan 26 00:11:08 crc kubenswrapper[5110]: I0126 00:11:08.316392 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 26 00:11:08 crc kubenswrapper[5110]: E0126 00:11:08.316605 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 26 00:11:08 crc kubenswrapper[5110]: E0126 00:11:08.316923 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 26 00:11:08 crc kubenswrapper[5110]: E0126 00:11:08.316985 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3"
Jan 26 00:11:08 crc kubenswrapper[5110]: I0126 00:11:08.317361 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 26 00:11:08 crc kubenswrapper[5110]: E0126 00:11:08.317590 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 26 00:11:08 crc kubenswrapper[5110]: I0126 00:11:08.367025 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-77v2r" event={"ID":"db003609-47aa-4a6a-a7ec-6dbc03ded29a","Type":"ContainerStarted","Data":"61ecc6f098ba8d1c61ea5a49542cc8f94d3511552d4fc3acc055d95734b8fcaf"}
Jan 26 00:11:08 crc kubenswrapper[5110]: I0126 00:11:08.394660 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-77v2r" podStartSLOduration=105.394635967 podStartE2EDuration="1m45.394635967s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:08.39331121 +0000 UTC m=+125.622209819" watchObservedRunningTime="2026-01-26 00:11:08.394635967 +0000 UTC m=+125.623534576"
Jan 26 00:11:08 crc kubenswrapper[5110]: E0126 00:11:08.476615 5110 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 26 00:11:08 crc kubenswrapper[5110]: I0126 00:11:08.514547 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-8ndzr"]
Jan 26 00:11:08 crc kubenswrapper[5110]: I0126 00:11:08.514705 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr"
Jan 26 00:11:08 crc kubenswrapper[5110]: E0126 00:11:08.514856 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3"
Jan 26 00:11:10 crc kubenswrapper[5110]: I0126 00:11:10.316845 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 26 00:11:10 crc kubenswrapper[5110]: E0126 00:11:10.317064 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 26 00:11:10 crc kubenswrapper[5110]: I0126 00:11:10.316845 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr"
Jan 26 00:11:10 crc kubenswrapper[5110]: I0126 00:11:10.317129 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 26 00:11:10 crc kubenswrapper[5110]: E0126 00:11:10.317173 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3" Jan 26 00:11:10 crc kubenswrapper[5110]: E0126 00:11:10.317237 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:10 crc kubenswrapper[5110]: I0126 00:11:10.316845 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:10 crc kubenswrapper[5110]: E0126 00:11:10.317363 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:12 crc kubenswrapper[5110]: I0126 00:11:12.317013 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:12 crc kubenswrapper[5110]: I0126 00:11:12.317091 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:12 crc kubenswrapper[5110]: I0126 00:11:12.317013 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:12 crc kubenswrapper[5110]: I0126 00:11:12.317080 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:11:12 crc kubenswrapper[5110]: E0126 00:11:12.317281 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:12 crc kubenswrapper[5110]: E0126 00:11:12.317473 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8ndzr" podUID="040d3d5f-c02a-4a70-92af-70700fd9e3c3" Jan 26 00:11:12 crc kubenswrapper[5110]: E0126 00:11:12.317578 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:12 crc kubenswrapper[5110]: E0126 00:11:12.317645 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:14 crc kubenswrapper[5110]: I0126 00:11:14.317162 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:11:14 crc kubenswrapper[5110]: I0126 00:11:14.317250 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:14 crc kubenswrapper[5110]: I0126 00:11:14.317162 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:14 crc kubenswrapper[5110]: I0126 00:11:14.317733 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:14 crc kubenswrapper[5110]: I0126 00:11:14.322005 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 26 00:11:14 crc kubenswrapper[5110]: I0126 00:11:14.322143 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 26 00:11:14 crc kubenswrapper[5110]: I0126 00:11:14.322186 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 26 00:11:14 crc kubenswrapper[5110]: I0126 00:11:14.322945 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 26 00:11:14 crc kubenswrapper[5110]: I0126 00:11:14.323721 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 26 00:11:14 crc kubenswrapper[5110]: I0126 
00:11:14.324455 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.742600 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.800236 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-qdwb2"] Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.822829 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jp8q"] Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.823104 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.847539 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.847620 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.848397 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.848427 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.848582 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.848658 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver\"/\"config\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.849091 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.849193 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.849300 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.849678 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.858318 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2gvm5"] Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.859174 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jp8q" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.863819 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-46ljf"] Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.869485 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.869551 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.873916 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-tjq6l"] Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.874976 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2gvm5" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.877037 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.878425 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.882231 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.883519 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.883781 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.883861 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.883980 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.884269 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.897934 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ngcjz"] Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.898423 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-46ljf" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.900578 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.900818 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.901008 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.903155 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl"] Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.903372 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.903228 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.903415 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ngcjz" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.908969 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.909422 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.909667 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.909934 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.909207 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.910352 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.909430 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.910735 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.910864 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.911174 5110 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-authentication/oauth-openshift-66458b6674-tg86c"] Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.911383 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.915909 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.918355 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.918705 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.919061 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.919202 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.929670 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.929733 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.929817 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 
00:11:22.930614 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.931077 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bfssk"] Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.931601 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.931646 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.936098 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.936521 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.937676 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.943409 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.943432 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.943759 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 26 00:11:22 crc 
kubenswrapper[5110]: I0126 00:11:22.943431 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.944926 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.944991 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.945510 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.945689 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.974389 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.974751 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.982711 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.983640 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/acf81ab8-6555-4503-b083-452fc4d249c1-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-2gvm5\" (UID: \"acf81ab8-6555-4503-b083-452fc4d249c1\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2gvm5" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.983675 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6314e04a-09b6-48f6-950c-271199f2f803-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-46ljf\" (UID: \"6314e04a-09b6-48f6-950c-271199f2f803\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-46ljf" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.983705 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8jcg\" (UniqueName: \"kubernetes.io/projected/0cc5795e-df15-48c8-949a-7254633f27e3-kube-api-access-r8jcg\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.983724 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/125ce521-01f2-4198-b07e-e538e248c82f-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-ngcjz\" (UID: \"125ce521-01f2-4198-b07e-e538e248c82f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ngcjz" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.983742 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ecc19229-da38-4ff6-bc7a-d5864b4d1101-audit-policies\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.983759 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/766da705-879b-4d57-a34d-43eca4c9da19-console-config\") pod \"console-64d44f6ddf-tjq6l\" (UID: \"766da705-879b-4d57-a34d-43eca4c9da19\") " pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.983783 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/766da705-879b-4d57-a34d-43eca4c9da19-trusted-ca-bundle\") pod \"console-64d44f6ddf-tjq6l\" (UID: \"766da705-879b-4d57-a34d-43eca4c9da19\") " pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.983821 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acf81ab8-6555-4503-b083-452fc4d249c1-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-2gvm5\" (UID: \"acf81ab8-6555-4503-b083-452fc4d249c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2gvm5" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.983842 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cc5795e-df15-48c8-949a-7254633f27e3-serving-cert\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.983858 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzpfn\" (UniqueName: \"kubernetes.io/projected/ecc19229-da38-4ff6-bc7a-d5864b4d1101-kube-api-access-nzpfn\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 
00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.983890 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6314e04a-09b6-48f6-950c-271199f2f803-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-46ljf\" (UID: \"6314e04a-09b6-48f6-950c-271199f2f803\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-46ljf" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.983909 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bad86cda-e9f6-44e8-84dd-966fc1a2434b-config\") pod \"kube-apiserver-operator-575994946d-5jp8q\" (UID: \"bad86cda-e9f6-44e8-84dd-966fc1a2434b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jp8q" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.983929 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ecc19229-da38-4ff6-bc7a-d5864b4d1101-audit-dir\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984078 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0cc5795e-df15-48c8-949a-7254633f27e3-audit\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984102 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acf81ab8-6555-4503-b083-452fc4d249c1-config\") pod \"openshift-controller-manager-operator-686468bdd5-2gvm5\" (UID: 
\"acf81ab8-6555-4503-b083-452fc4d249c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2gvm5" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984119 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0cc5795e-df15-48c8-949a-7254633f27e3-etcd-client\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984152 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6314e04a-09b6-48f6-950c-271199f2f803-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-46ljf\" (UID: \"6314e04a-09b6-48f6-950c-271199f2f803\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-46ljf" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984172 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecc19229-da38-4ff6-bc7a-d5864b4d1101-serving-cert\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984206 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmxm8\" (UniqueName: \"kubernetes.io/projected/766da705-879b-4d57-a34d-43eca4c9da19-kube-api-access-rmxm8\") pod \"console-64d44f6ddf-tjq6l\" (UID: \"766da705-879b-4d57-a34d-43eca4c9da19\") " pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984231 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/125ce521-01f2-4198-b07e-e538e248c82f-config\") pod \"kube-controller-manager-operator-69d5f845f8-ngcjz\" (UID: \"125ce521-01f2-4198-b07e-e538e248c82f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ngcjz" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984249 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/766da705-879b-4d57-a34d-43eca4c9da19-console-oauth-config\") pod \"console-64d44f6ddf-tjq6l\" (UID: \"766da705-879b-4d57-a34d-43eca4c9da19\") " pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984283 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984323 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/766da705-879b-4d57-a34d-43eca4c9da19-oauth-serving-cert\") pod \"console-64d44f6ddf-tjq6l\" (UID: \"766da705-879b-4d57-a34d-43eca4c9da19\") " pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984392 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/125ce521-01f2-4198-b07e-e538e248c82f-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-ngcjz\" (UID: \"125ce521-01f2-4198-b07e-e538e248c82f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ngcjz" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984465 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xgkzt\" (UniqueName: \"kubernetes.io/projected/acf81ab8-6555-4503-b083-452fc4d249c1-kube-api-access-xgkzt\") pod \"openshift-controller-manager-operator-686468bdd5-2gvm5\" (UID: \"acf81ab8-6555-4503-b083-452fc4d249c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2gvm5" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984472 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984490 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0cc5795e-df15-48c8-949a-7254633f27e3-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984520 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/766da705-879b-4d57-a34d-43eca4c9da19-console-serving-cert\") pod \"console-64d44f6ddf-tjq6l\" (UID: \"766da705-879b-4d57-a34d-43eca4c9da19\") " pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984542 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cc5795e-df15-48c8-949a-7254633f27e3-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984561 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/0cc5795e-df15-48c8-949a-7254633f27e3-audit-dir\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984579 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bad86cda-e9f6-44e8-84dd-966fc1a2434b-serving-cert\") pod \"kube-apiserver-operator-575994946d-5jp8q\" (UID: \"bad86cda-e9f6-44e8-84dd-966fc1a2434b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jp8q" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984596 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/bad86cda-e9f6-44e8-84dd-966fc1a2434b-tmp-dir\") pod \"kube-apiserver-operator-575994946d-5jp8q\" (UID: \"bad86cda-e9f6-44e8-84dd-966fc1a2434b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jp8q" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984617 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0cc5795e-df15-48c8-949a-7254633f27e3-encryption-config\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984635 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0cc5795e-df15-48c8-949a-7254633f27e3-node-pullsecrets\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984663 
5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ecc19229-da38-4ff6-bc7a-d5864b4d1101-etcd-client\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984679 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/125ce521-01f2-4198-b07e-e538e248c82f-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-ngcjz\" (UID: \"125ce521-01f2-4198-b07e-e538e248c82f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ngcjz" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984693 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ecc19229-da38-4ff6-bc7a-d5864b4d1101-etcd-serving-ca\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984720 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt6kh\" (UniqueName: \"kubernetes.io/projected/6314e04a-09b6-48f6-950c-271199f2f803-kube-api-access-pt6kh\") pod \"ingress-operator-6b9cb4dbcf-46ljf\" (UID: \"6314e04a-09b6-48f6-950c-271199f2f803\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-46ljf" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984737 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/766da705-879b-4d57-a34d-43eca4c9da19-service-ca\") pod \"console-64d44f6ddf-tjq6l\" 
(UID: \"766da705-879b-4d57-a34d-43eca4c9da19\") " pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984751 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bad86cda-e9f6-44e8-84dd-966fc1a2434b-kube-api-access\") pod \"kube-apiserver-operator-575994946d-5jp8q\" (UID: \"bad86cda-e9f6-44e8-84dd-966fc1a2434b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jp8q" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984784 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cc5795e-df15-48c8-949a-7254633f27e3-config\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984818 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecc19229-da38-4ff6-bc7a-d5864b4d1101-trusted-ca-bundle\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984929 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0cc5795e-df15-48c8-949a-7254633f27e3-image-import-ca\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.984950 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/ecc19229-da38-4ff6-bc7a-d5864b4d1101-encryption-config\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.985760 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.985980 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.991121 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-7plfs"] Jan 26 00:11:22 crc kubenswrapper[5110]: I0126 00:11:22.991318 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bfssk" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.003839 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-brrzw"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.004145 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.008924 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.009207 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.009412 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.009639 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.010078 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.010229 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.010420 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.010763 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.010935 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.011080 5110 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.011220 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.011353 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.019749 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-77449"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.020749 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.031127 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.031337 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.032902 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.057482 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.058081 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-77449" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.059154 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.063412 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.068854 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.076227 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-cg4nl"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.076434 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.080926 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.084965 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.085366 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.085410 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.087891 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088187 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rmxm8\" (UniqueName: \"kubernetes.io/projected/766da705-879b-4d57-a34d-43eca4c9da19-kube-api-access-rmxm8\") pod \"console-64d44f6ddf-tjq6l\" (UID: \"766da705-879b-4d57-a34d-43eca4c9da19\") " pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088226 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125ce521-01f2-4198-b07e-e538e248c82f-config\") pod \"kube-controller-manager-operator-69d5f845f8-ngcjz\" (UID: \"125ce521-01f2-4198-b07e-e538e248c82f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ngcjz" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088231 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088249 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/766da705-879b-4d57-a34d-43eca4c9da19-console-oauth-config\") pod \"console-64d44f6ddf-tjq6l\" (UID: \"766da705-879b-4d57-a34d-43eca4c9da19\") " pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088271 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/766da705-879b-4d57-a34d-43eca4c9da19-oauth-serving-cert\") pod \"console-64d44f6ddf-tjq6l\" (UID: \"766da705-879b-4d57-a34d-43eca4c9da19\") " pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088299 5110 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088327 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6bf41d48-487f-4123-9d83-bef3d1efaa68-etcd-ca\") pod \"etcd-operator-69b85846b6-7plfs\" (UID: \"6bf41d48-487f-4123-9d83-bef3d1efaa68\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088349 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088373 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhsv6\" (UniqueName: \"kubernetes.io/projected/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-kube-api-access-fhsv6\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088399 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088473 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf41d48-487f-4123-9d83-bef3d1efaa68-config\") pod \"etcd-operator-69b85846b6-7plfs\" (UID: \"6bf41d48-487f-4123-9d83-bef3d1efaa68\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088494 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/125ce521-01f2-4198-b07e-e538e248c82f-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-ngcjz\" (UID: \"125ce521-01f2-4198-b07e-e538e248c82f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ngcjz" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088530 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bf41d48-487f-4123-9d83-bef3d1efaa68-serving-cert\") pod \"etcd-operator-69b85846b6-7plfs\" (UID: \"6bf41d48-487f-4123-9d83-bef3d1efaa68\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088547 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088564 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/be1e2506-51b5-4287-a99d-37df39a44ff7-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-bfssk\" (UID: \"be1e2506-51b5-4287-a99d-37df39a44ff7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bfssk" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088597 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xgkzt\" (UniqueName: \"kubernetes.io/projected/acf81ab8-6555-4503-b083-452fc4d249c1-kube-api-access-xgkzt\") pod \"openshift-controller-manager-operator-686468bdd5-2gvm5\" (UID: \"acf81ab8-6555-4503-b083-452fc4d249c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2gvm5" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088619 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0cc5795e-df15-48c8-949a-7254633f27e3-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088654 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/766da705-879b-4d57-a34d-43eca4c9da19-console-serving-cert\") pod \"console-64d44f6ddf-tjq6l\" (UID: \"766da705-879b-4d57-a34d-43eca4c9da19\") " pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088677 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088703 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cc5795e-df15-48c8-949a-7254633f27e3-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088723 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0cc5795e-df15-48c8-949a-7254633f27e3-audit-dir\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088744 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bad86cda-e9f6-44e8-84dd-966fc1a2434b-serving-cert\") pod \"kube-apiserver-operator-575994946d-5jp8q\" (UID: \"bad86cda-e9f6-44e8-84dd-966fc1a2434b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jp8q" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088768 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/bad86cda-e9f6-44e8-84dd-966fc1a2434b-tmp-dir\") pod \"kube-apiserver-operator-575994946d-5jp8q\" (UID: \"bad86cda-e9f6-44e8-84dd-966fc1a2434b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jp8q" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088794 
5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0cc5795e-df15-48c8-949a-7254633f27e3-encryption-config\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088833 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-audit-policies\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088861 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0cc5795e-df15-48c8-949a-7254633f27e3-node-pullsecrets\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088911 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088935 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6bf41d48-487f-4123-9d83-bef3d1efaa68-etcd-service-ca\") pod \"etcd-operator-69b85846b6-7plfs\" (UID: \"6bf41d48-487f-4123-9d83-bef3d1efaa68\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088964 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ecc19229-da38-4ff6-bc7a-d5864b4d1101-etcd-client\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088979 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6bf41d48-487f-4123-9d83-bef3d1efaa68-tmp-dir\") pod \"etcd-operator-69b85846b6-7plfs\" (UID: \"6bf41d48-487f-4123-9d83-bef3d1efaa68\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088997 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/125ce521-01f2-4198-b07e-e538e248c82f-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-ngcjz\" (UID: \"125ce521-01f2-4198-b07e-e538e248c82f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ngcjz" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.089015 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ecc19229-da38-4ff6-bc7a-d5864b4d1101-etcd-serving-ca\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.089034 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pt6kh\" (UniqueName: \"kubernetes.io/projected/6314e04a-09b6-48f6-950c-271199f2f803-kube-api-access-pt6kh\") pod 
\"ingress-operator-6b9cb4dbcf-46ljf\" (UID: \"6314e04a-09b6-48f6-950c-271199f2f803\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-46ljf" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.089053 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/766da705-879b-4d57-a34d-43eca4c9da19-service-ca\") pod \"console-64d44f6ddf-tjq6l\" (UID: \"766da705-879b-4d57-a34d-43eca4c9da19\") " pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.089129 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bad86cda-e9f6-44e8-84dd-966fc1a2434b-kube-api-access\") pod \"kube-apiserver-operator-575994946d-5jp8q\" (UID: \"bad86cda-e9f6-44e8-84dd-966fc1a2434b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jp8q" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.089151 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6bf41d48-487f-4123-9d83-bef3d1efaa68-etcd-client\") pod \"etcd-operator-69b85846b6-7plfs\" (UID: \"6bf41d48-487f-4123-9d83-bef3d1efaa68\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.089172 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cc5795e-df15-48c8-949a-7254633f27e3-config\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.089196 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ecc19229-da38-4ff6-bc7a-d5864b4d1101-trusted-ca-bundle\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.089218 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.089245 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0cc5795e-df15-48c8-949a-7254633f27e3-image-import-ca\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.089246 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125ce521-01f2-4198-b07e-e538e248c82f-config\") pod \"kube-controller-manager-operator-69d5f845f8-ngcjz\" (UID: \"125ce521-01f2-4198-b07e-e538e248c82f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ngcjz" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.089263 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ecc19229-da38-4ff6-bc7a-d5864b4d1101-encryption-config\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.089284 5110 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp8fj\" (UniqueName: \"kubernetes.io/projected/6bf41d48-487f-4123-9d83-bef3d1efaa68-kube-api-access-gp8fj\") pod \"etcd-operator-69b85846b6-7plfs\" (UID: \"6bf41d48-487f-4123-9d83-bef3d1efaa68\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.088203 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.089709 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0cc5795e-df15-48c8-949a-7254633f27e3-node-pullsecrets\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.089914 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/acf81ab8-6555-4503-b083-452fc4d249c1-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-2gvm5\" (UID: \"acf81ab8-6555-4503-b083-452fc4d249c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2gvm5" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.091836 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6314e04a-09b6-48f6-950c-271199f2f803-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-46ljf\" (UID: \"6314e04a-09b6-48f6-950c-271199f2f803\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-46ljf" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.091881 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-r8jcg\" (UniqueName: \"kubernetes.io/projected/0cc5795e-df15-48c8-949a-7254633f27e3-kube-api-access-r8jcg\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.091917 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/125ce521-01f2-4198-b07e-e538e248c82f-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-ngcjz\" (UID: \"125ce521-01f2-4198-b07e-e538e248c82f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ngcjz" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.091940 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ecc19229-da38-4ff6-bc7a-d5864b4d1101-audit-policies\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.091963 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/766da705-879b-4d57-a34d-43eca4c9da19-console-config\") pod \"console-64d44f6ddf-tjq6l\" (UID: \"766da705-879b-4d57-a34d-43eca4c9da19\") " pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.092008 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/766da705-879b-4d57-a34d-43eca4c9da19-trusted-ca-bundle\") pod \"console-64d44f6ddf-tjq6l\" (UID: \"766da705-879b-4d57-a34d-43eca4c9da19\") " pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.092036 5110 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acf81ab8-6555-4503-b083-452fc4d249c1-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-2gvm5\" (UID: \"acf81ab8-6555-4503-b083-452fc4d249c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2gvm5" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.092073 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.092098 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cc5795e-df15-48c8-949a-7254633f27e3-serving-cert\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.092119 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nzpfn\" (UniqueName: \"kubernetes.io/projected/ecc19229-da38-4ff6-bc7a-d5864b4d1101-kube-api-access-nzpfn\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.092217 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.092244 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.092275 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6314e04a-09b6-48f6-950c-271199f2f803-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-46ljf\" (UID: \"6314e04a-09b6-48f6-950c-271199f2f803\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-46ljf" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.092298 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bad86cda-e9f6-44e8-84dd-966fc1a2434b-config\") pod \"kube-apiserver-operator-575994946d-5jp8q\" (UID: \"bad86cda-e9f6-44e8-84dd-966fc1a2434b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jp8q" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.092333 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ecc19229-da38-4ff6-bc7a-d5864b4d1101-audit-dir\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.092363 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" 
(UniqueName: \"kubernetes.io/configmap/0cc5795e-df15-48c8-949a-7254633f27e3-audit\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.092384 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acf81ab8-6555-4503-b083-452fc4d249c1-config\") pod \"openshift-controller-manager-operator-686468bdd5-2gvm5\" (UID: \"acf81ab8-6555-4503-b083-452fc4d249c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2gvm5" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.092403 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.092422 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbt2w\" (UniqueName: \"kubernetes.io/projected/be1e2506-51b5-4287-a99d-37df39a44ff7-kube-api-access-gbt2w\") pod \"cluster-samples-operator-6b564684c8-bfssk\" (UID: \"be1e2506-51b5-4287-a99d-37df39a44ff7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bfssk" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.092440 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0cc5795e-df15-48c8-949a-7254633f27e3-etcd-client\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 
00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.093684 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.095669 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cc5795e-df15-48c8-949a-7254633f27e3-config\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.097319 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0cc5795e-df15-48c8-949a-7254633f27e3-encryption-config\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.099454 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ecc19229-da38-4ff6-bc7a-d5864b4d1101-etcd-client\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.100017 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/bad86cda-e9f6-44e8-84dd-966fc1a2434b-tmp-dir\") pod \"kube-apiserver-operator-575994946d-5jp8q\" (UID: \"bad86cda-e9f6-44e8-84dd-966fc1a2434b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jp8q" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.101713 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/766da705-879b-4d57-a34d-43eca4c9da19-console-oauth-config\") pod \"console-64d44f6ddf-tjq6l\" (UID: \"766da705-879b-4d57-a34d-43eca4c9da19\") " pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.104400 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/125ce521-01f2-4198-b07e-e538e248c82f-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-ngcjz\" (UID: \"125ce521-01f2-4198-b07e-e538e248c82f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ngcjz" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.105570 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/766da705-879b-4d57-a34d-43eca4c9da19-service-ca\") pod \"console-64d44f6ddf-tjq6l\" (UID: \"766da705-879b-4d57-a34d-43eca4c9da19\") " pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.105683 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cc5795e-df15-48c8-949a-7254633f27e3-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.105932 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/acf81ab8-6555-4503-b083-452fc4d249c1-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-2gvm5\" (UID: \"acf81ab8-6555-4503-b083-452fc4d249c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2gvm5" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.106516 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/766da705-879b-4d57-a34d-43eca4c9da19-oauth-serving-cert\") pod \"console-64d44f6ddf-tjq6l\" (UID: \"766da705-879b-4d57-a34d-43eca4c9da19\") " pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.107416 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acf81ab8-6555-4503-b083-452fc4d249c1-config\") pod \"openshift-controller-manager-operator-686468bdd5-2gvm5\" (UID: \"acf81ab8-6555-4503-b083-452fc4d249c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2gvm5" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.107519 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ecc19229-da38-4ff6-bc7a-d5864b4d1101-audit-dir\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.107644 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bad86cda-e9f6-44e8-84dd-966fc1a2434b-config\") pod \"kube-apiserver-operator-575994946d-5jp8q\" (UID: \"bad86cda-e9f6-44e8-84dd-966fc1a2434b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jp8q" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.107643 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0cc5795e-df15-48c8-949a-7254633f27e3-audit-dir\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.108670 5110 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/766da705-879b-4d57-a34d-43eca4c9da19-console-config\") pod \"console-64d44f6ddf-tjq6l\" (UID: \"766da705-879b-4d57-a34d-43eca4c9da19\") " pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.109273 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/125ce521-01f2-4198-b07e-e538e248c82f-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-ngcjz\" (UID: \"125ce521-01f2-4198-b07e-e538e248c82f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ngcjz" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.109304 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/766da705-879b-4d57-a34d-43eca4c9da19-trusted-ca-bundle\") pod \"console-64d44f6ddf-tjq6l\" (UID: \"766da705-879b-4d57-a34d-43eca4c9da19\") " pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.107421 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6314e04a-09b6-48f6-950c-271199f2f803-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-46ljf\" (UID: \"6314e04a-09b6-48f6-950c-271199f2f803\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-46ljf" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.109947 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecc19229-da38-4ff6-bc7a-d5864b4d1101-serving-cert\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.109888 5110 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-cg4nl" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.109662 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gk4w9"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.110722 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bad86cda-e9f6-44e8-84dd-966fc1a2434b-serving-cert\") pod \"kube-apiserver-operator-575994946d-5jp8q\" (UID: \"bad86cda-e9f6-44e8-84dd-966fc1a2434b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jp8q" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.110105 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-audit-dir\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.109997 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.111083 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/125ce521-01f2-4198-b07e-e538e248c82f-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-ngcjz\" (UID: \"125ce521-01f2-4198-b07e-e538e248c82f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ngcjz" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.112763 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecc19229-da38-4ff6-bc7a-d5864b4d1101-trusted-ca-bundle\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.116144 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6314e04a-09b6-48f6-950c-271199f2f803-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-46ljf\" (UID: \"6314e04a-09b6-48f6-950c-271199f2f803\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-46ljf" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.116903 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acf81ab8-6555-4503-b083-452fc4d249c1-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-2gvm5\" (UID: \"acf81ab8-6555-4503-b083-452fc4d249c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2gvm5" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.117226 5110 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.119306 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmxm8\" (UniqueName: \"kubernetes.io/projected/766da705-879b-4d57-a34d-43eca4c9da19-kube-api-access-rmxm8\") pod \"console-64d44f6ddf-tjq6l\" (UID: \"766da705-879b-4d57-a34d-43eca4c9da19\") " pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.119514 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.120642 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/766da705-879b-4d57-a34d-43eca4c9da19-console-serving-cert\") pod \"console-64d44f6ddf-tjq6l\" (UID: \"766da705-879b-4d57-a34d-43eca4c9da19\") " pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.120922 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgkzt\" (UniqueName: \"kubernetes.io/projected/acf81ab8-6555-4503-b083-452fc4d249c1-kube-api-access-xgkzt\") pod \"openshift-controller-manager-operator-686468bdd5-2gvm5\" (UID: \"acf81ab8-6555-4503-b083-452fc4d249c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2gvm5" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.121651 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzpfn\" (UniqueName: \"kubernetes.io/projected/ecc19229-da38-4ff6-bc7a-d5864b4d1101-kube-api-access-nzpfn\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 
00:11:23.121792 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.122011 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.122168 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.122462 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.124190 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecc19229-da38-4ff6-bc7a-d5864b4d1101-serving-cert\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.124300 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bad86cda-e9f6-44e8-84dd-966fc1a2434b-kube-api-access\") pod \"kube-apiserver-operator-575994946d-5jp8q\" (UID: \"bad86cda-e9f6-44e8-84dd-966fc1a2434b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jp8q" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.125486 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.125811 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/ecc19229-da38-4ff6-bc7a-d5864b4d1101-audit-policies\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.127126 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ecc19229-da38-4ff6-bc7a-d5864b4d1101-encryption-config\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.127242 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cc5795e-df15-48c8-949a-7254633f27e3-serving-cert\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.127328 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6314e04a-09b6-48f6-950c-271199f2f803-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-46ljf\" (UID: \"6314e04a-09b6-48f6-950c-271199f2f803\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-46ljf" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.127499 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ecc19229-da38-4ff6-bc7a-d5864b4d1101-etcd-serving-ca\") pod \"apiserver-8596bd845d-hsvjl\" (UID: \"ecc19229-da38-4ff6-bc7a-d5864b4d1101\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.127581 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/0cc5795e-df15-48c8-949a-7254633f27e3-image-import-ca\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.127708 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0cc5795e-df15-48c8-949a-7254633f27e3-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.128352 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6314e04a-09b6-48f6-950c-271199f2f803-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-46ljf\" (UID: \"6314e04a-09b6-48f6-950c-271199f2f803\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-46ljf" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.130170 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8jcg\" (UniqueName: \"kubernetes.io/projected/0cc5795e-df15-48c8-949a-7254633f27e3-kube-api-access-r8jcg\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.130339 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt6kh\" (UniqueName: \"kubernetes.io/projected/6314e04a-09b6-48f6-950c-271199f2f803-kube-api-access-pt6kh\") pod \"ingress-operator-6b9cb4dbcf-46ljf\" (UID: \"6314e04a-09b6-48f6-950c-271199f2f803\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-46ljf" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.130748 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/0cc5795e-df15-48c8-949a-7254633f27e3-etcd-client\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.130998 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0cc5795e-df15-48c8-949a-7254633f27e3-audit\") pod \"apiserver-9ddfb9f55-qdwb2\" (UID: \"0cc5795e-df15-48c8-949a-7254633f27e3\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.140991 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-c5l6l"]
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.141195 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gk4w9"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.145823 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.159516 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp"]
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.160097 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-c5l6l"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.165686 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-67svm"]
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.166674 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.167338 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.171445 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-crnjv"]
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.171944 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-67svm"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.176297 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29489760-jjbnv"]
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.176466 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-crnjv"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.181545 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jp8q"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.181596 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.199403 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2gvm5"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.202647 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-tgtdw"]
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.202866 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29489760-jjbnv"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.206525 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.211732 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-client-ca\") pod \"controller-manager-65b6cccf98-8ssgr\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.211812 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bf41d48-487f-4123-9d83-bef3d1efaa68-serving-cert\") pod \"etcd-operator-69b85846b6-7plfs\" (UID: \"6bf41d48-487f-4123-9d83-bef3d1efaa68\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.211921 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.211979 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/be1e2506-51b5-4287-a99d-37df39a44ff7-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-bfssk\" (UID: \"be1e2506-51b5-4287-a99d-37df39a44ff7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bfssk"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212030 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-8ssgr\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212074 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212114 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-audit-policies\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212135 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-config\") pod \"controller-manager-65b6cccf98-8ssgr\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212160 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212179 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6bf41d48-487f-4123-9d83-bef3d1efaa68-etcd-service-ca\") pod \"etcd-operator-69b85846b6-7plfs\" (UID: \"6bf41d48-487f-4123-9d83-bef3d1efaa68\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212212 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6bf41d48-487f-4123-9d83-bef3d1efaa68-tmp-dir\") pod \"etcd-operator-69b85846b6-7plfs\" (UID: \"6bf41d48-487f-4123-9d83-bef3d1efaa68\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212239 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/aaaae9da-7fe6-47c9-8a79-20343d5f6c43-tmp-dir\") pod \"dns-operator-799b87ffcd-cg4nl\" (UID: \"aaaae9da-7fe6-47c9-8a79-20343d5f6c43\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cg4nl"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212275 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6bf41d48-487f-4123-9d83-bef3d1efaa68-etcd-client\") pod \"etcd-operator-69b85846b6-7plfs\" (UID: \"6bf41d48-487f-4123-9d83-bef3d1efaa68\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212305 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212325 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5n6n\" (UniqueName: \"kubernetes.io/projected/aaaae9da-7fe6-47c9-8a79-20343d5f6c43-kube-api-access-r5n6n\") pod \"dns-operator-799b87ffcd-cg4nl\" (UID: \"aaaae9da-7fe6-47c9-8a79-20343d5f6c43\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cg4nl"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212360 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gp8fj\" (UniqueName: \"kubernetes.io/projected/6bf41d48-487f-4123-9d83-bef3d1efaa68-kube-api-access-gp8fj\") pod \"etcd-operator-69b85846b6-7plfs\" (UID: \"6bf41d48-487f-4123-9d83-bef3d1efaa68\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212382 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-serving-cert\") pod \"controller-manager-65b6cccf98-8ssgr\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212458 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-tmp\") pod \"controller-manager-65b6cccf98-8ssgr\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212486 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212514 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212536 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212595 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212615 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gbt2w\" (UniqueName: \"kubernetes.io/projected/be1e2506-51b5-4287-a99d-37df39a44ff7-kube-api-access-gbt2w\") pod \"cluster-samples-operator-6b564684c8-bfssk\" (UID: \"be1e2506-51b5-4287-a99d-37df39a44ff7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bfssk"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212654 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-audit-dir\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212688 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grskb\" (UniqueName: \"kubernetes.io/projected/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-kube-api-access-grskb\") pod \"controller-manager-65b6cccf98-8ssgr\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212719 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212743 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6bf41d48-487f-4123-9d83-bef3d1efaa68-etcd-ca\") pod \"etcd-operator-69b85846b6-7plfs\" (UID: \"6bf41d48-487f-4123-9d83-bef3d1efaa68\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212770 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212794 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fhsv6\" (UniqueName: \"kubernetes.io/projected/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-kube-api-access-fhsv6\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212843 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/aaaae9da-7fe6-47c9-8a79-20343d5f6c43-metrics-tls\") pod \"dns-operator-799b87ffcd-cg4nl\" (UID: \"aaaae9da-7fe6-47c9-8a79-20343d5f6c43\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cg4nl"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212873 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.212893 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf41d48-487f-4123-9d83-bef3d1efaa68-config\") pod \"etcd-operator-69b85846b6-7plfs\" (UID: \"6bf41d48-487f-4123-9d83-bef3d1efaa68\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.213683 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf41d48-487f-4123-9d83-bef3d1efaa68-config\") pod \"etcd-operator-69b85846b6-7plfs\" (UID: \"6bf41d48-487f-4123-9d83-bef3d1efaa68\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.213731 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6bf41d48-487f-4123-9d83-bef3d1efaa68-tmp-dir\") pod \"etcd-operator-69b85846b6-7plfs\" (UID: \"6bf41d48-487f-4123-9d83-bef3d1efaa68\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.214124 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6bf41d48-487f-4123-9d83-bef3d1efaa68-etcd-service-ca\") pod \"etcd-operator-69b85846b6-7plfs\" (UID: \"6bf41d48-487f-4123-9d83-bef3d1efaa68\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.214166 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.214197 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-audit-dir\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.214849 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-audit-policies\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.215260 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6bf41d48-487f-4123-9d83-bef3d1efaa68-etcd-ca\") pod \"etcd-operator-69b85846b6-7plfs\" (UID: \"6bf41d48-487f-4123-9d83-bef3d1efaa68\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.215308 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.217436 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bf41d48-487f-4123-9d83-bef3d1efaa68-serving-cert\") pod \"etcd-operator-69b85846b6-7plfs\" (UID: \"6bf41d48-487f-4123-9d83-bef3d1efaa68\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.217544 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.217680 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-46ljf"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.218097 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/be1e2506-51b5-4287-a99d-37df39a44ff7-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-bfssk\" (UID: \"be1e2506-51b5-4287-a99d-37df39a44ff7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bfssk"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.218388 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.218483 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.219289 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.219857 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.220067 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.220190 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.221340 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.223160 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.228744 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6bf41d48-487f-4123-9d83-bef3d1efaa68-etcd-client\") pod \"etcd-operator-69b85846b6-7plfs\" (UID: \"6bf41d48-487f-4123-9d83-bef3d1efaa68\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.229032 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4j4dd"]
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.229288 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-tgtdw"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.237883 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-tjq6l"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.247038 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-8chxh"]
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.247669 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4j4dd"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.248096 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.265920 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.267733 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-979dq"]
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.267949 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-8chxh"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.277866 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ngcjz"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.283971 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq"]
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.286046 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.286117 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-979dq"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.293351 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.296225 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-g57zz"]
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.301666 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7"]
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.305897 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.306882 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-jptld"]
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.307052 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-g57zz"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.307131 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.307179 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.340298 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-tmp\") pod \"controller-manager-65b6cccf98-8ssgr\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.340366 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-grskb\" (UniqueName: \"kubernetes.io/projected/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-kube-api-access-grskb\") pod \"controller-manager-65b6cccf98-8ssgr\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.340642 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/aaaae9da-7fe6-47c9-8a79-20343d5f6c43-metrics-tls\") pod \"dns-operator-799b87ffcd-cg4nl\" (UID: \"aaaae9da-7fe6-47c9-8a79-20343d5f6c43\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cg4nl"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.340670 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-client-ca\") pod \"controller-manager-65b6cccf98-8ssgr\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.340717 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-8ssgr\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.340752 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-config\") pod \"controller-manager-65b6cccf98-8ssgr\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.340780 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/aaaae9da-7fe6-47c9-8a79-20343d5f6c43-tmp-dir\") pod \"dns-operator-799b87ffcd-cg4nl\" (UID: \"aaaae9da-7fe6-47c9-8a79-20343d5f6c43\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cg4nl"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.340824 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r5n6n\" (UniqueName: \"kubernetes.io/projected/aaaae9da-7fe6-47c9-8a79-20343d5f6c43-kube-api-access-r5n6n\") pod \"dns-operator-799b87ffcd-cg4nl\" (UID: \"aaaae9da-7fe6-47c9-8a79-20343d5f6c43\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cg4nl"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.340852 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-serving-cert\") pod \"controller-manager-65b6cccf98-8ssgr\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.345437 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-serving-cert\") pod \"controller-manager-65b6cccf98-8ssgr\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.346349 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-tmp\") pod \"controller-manager-65b6cccf98-8ssgr\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.350230 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/aaaae9da-7fe6-47c9-8a79-20343d5f6c43-metrics-tls\") pod \"dns-operator-799b87ffcd-cg4nl\" (UID: \"aaaae9da-7fe6-47c9-8a79-20343d5f6c43\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cg4nl"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.351647 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-client-ca\") pod \"controller-manager-65b6cccf98-8ssgr\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.353057 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-8ssgr\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.354158 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-config\") pod \"controller-manager-65b6cccf98-8ssgr\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.354606 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.355275 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/aaaae9da-7fe6-47c9-8a79-20343d5f6c43-tmp-dir\") pod \"dns-operator-799b87ffcd-cg4nl\" (UID: \"aaaae9da-7fe6-47c9-8a79-20343d5f6c43\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cg4nl"
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.357937 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.369277 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.386076 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.467082 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.467980 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.468295 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.469882 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.476939 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-c49nj"]
Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.480691 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-jptld" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.489237 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.545127 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.549533 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.552988 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.556273 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-c49nj" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.562773 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.591256 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.591538 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.619085 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.637047 5110 kubelet.go:2537] 
"SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-n929c"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.638393 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.682473 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.682680 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.683497 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-55st2"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.704840 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-h4rkf"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.706102 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-n929c" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.710769 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.710831 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.712510 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-l6gvr"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.713286 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-h4rkf" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.713653 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-55st2" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.718467 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489760-vd2k2"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.719291 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-l6gvr" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.763871 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-bjwbh"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.771366 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp8fj\" (UniqueName: \"kubernetes.io/projected/6bf41d48-487f-4123-9d83-bef3d1efaa68-kube-api-access-gp8fj\") pod \"etcd-operator-69b85846b6-7plfs\" (UID: \"6bf41d48-487f-4123-9d83-bef3d1efaa68\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.777567 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-2dt85"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.777725 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.777879 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-vd2k2" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.785490 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhsv6\" (UniqueName: \"kubernetes.io/projected/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-kube-api-access-fhsv6\") pod \"oauth-openshift-66458b6674-tg86c\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.795752 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbt2w\" (UniqueName: \"kubernetes.io/projected/be1e2506-51b5-4287-a99d-37df39a44ff7-kube-api-access-gbt2w\") pod \"cluster-samples-operator-6b564684c8-bfssk\" (UID: \"be1e2506-51b5-4287-a99d-37df39a44ff7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bfssk" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.798880 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.799040 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2dt85" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.806146 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.809756 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-brrzw"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.809787 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.809811 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bfssk"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.809834 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-tjq6l"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.810006 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.810170 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jp8q"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.810226 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-qdwb2"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.810242 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-tg86c"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.810257 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ngcjz"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.810269 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.810281 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-77449"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.810295 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-crnjv"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.810313 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-gpd7g"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.817623 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b2941d0-33a9-475b-9ed9-5534ffda45c6-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-gk4w9\" (UID: 
\"1b2941d0-33a9-475b-9ed9-5534ffda45c6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gk4w9" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.817676 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2948eb5-9117-467a-89c9-742dde24b958-config\") pod \"route-controller-manager-776cdc94d6-rwwx4\" (UID: \"e2948eb5-9117-467a-89c9-742dde24b958\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.817703 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-ca-trust-extracted\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.817721 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-registry-certificates\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.817766 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.817787 5110 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clsvv\" (UniqueName: \"kubernetes.io/projected/e2948eb5-9117-467a-89c9-742dde24b958-kube-api-access-clsvv\") pod \"route-controller-manager-776cdc94d6-rwwx4\" (UID: \"e2948eb5-9117-467a-89c9-742dde24b958\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.817827 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-registry-tls\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.817844 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-bound-sa-token\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.821208 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-8chxh"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.821262 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pzlvv"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.821426 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-gpd7g" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.817858 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2948eb5-9117-467a-89c9-742dde24b958-client-ca\") pod \"route-controller-manager-776cdc94d6-rwwx4\" (UID: \"e2948eb5-9117-467a-89c9-742dde24b958\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.823484 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-installation-pull-secrets\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.823622 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-trusted-ca\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.823731 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkqvj\" (UniqueName: \"kubernetes.io/projected/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-kube-api-access-tkqvj\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.823828 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-mhp28\" (UniqueName: \"kubernetes.io/projected/2438617c-3b33-4b33-971c-afaab481cfe6-kube-api-access-mhp28\") pod \"downloads-747b44746d-77449\" (UID: \"2438617c-3b33-4b33-971c-afaab481cfe6\") " pod="openshift-console/downloads-747b44746d-77449" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.823913 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e2948eb5-9117-467a-89c9-742dde24b958-tmp\") pod \"route-controller-manager-776cdc94d6-rwwx4\" (UID: \"e2948eb5-9117-467a-89c9-742dde24b958\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" Jan 26 00:11:23 crc kubenswrapper[5110]: E0126 00:11:23.823930 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:24.323913565 +0000 UTC m=+141.552812174 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.824146 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2948eb5-9117-467a-89c9-742dde24b958-serving-cert\") pod \"route-controller-manager-776cdc94d6-rwwx4\" (UID: \"e2948eb5-9117-467a-89c9-742dde24b958\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.824255 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5rck\" (UniqueName: \"kubernetes.io/projected/1b2941d0-33a9-475b-9ed9-5534ffda45c6-kube-api-access-k5rck\") pod \"kube-storage-version-migrator-operator-565b79b866-gk4w9\" (UID: \"1b2941d0-33a9-475b-9ed9-5534ffda45c6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gk4w9" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.824392 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b2941d0-33a9-475b-9ed9-5534ffda45c6-config\") pod \"kube-storage-version-migrator-operator-565b79b866-gk4w9\" (UID: \"1b2941d0-33a9-475b-9ed9-5534ffda45c6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gk4w9" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.825964 5110 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.827998 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4j4dd"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.828039 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.828052 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-67svm"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.828072 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-vl2cl"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.828337 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pzlvv" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.845639 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.847105 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hvgtl"] Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.847375 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-vl2cl" Jan 26 00:11:23 crc kubenswrapper[5110]: I0126 00:11:23.997528 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bfssk" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.003089 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.003647 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.004534 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.004717 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-trusted-ca\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.004749 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tkqvj\" (UniqueName: \"kubernetes.io/projected/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-kube-api-access-tkqvj\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.004772 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mhp28\" (UniqueName: \"kubernetes.io/projected/2438617c-3b33-4b33-971c-afaab481cfe6-kube-api-access-mhp28\") pod \"downloads-747b44746d-77449\" (UID: \"2438617c-3b33-4b33-971c-afaab481cfe6\") " pod="openshift-console/downloads-747b44746d-77449" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.004810 5110 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e2948eb5-9117-467a-89c9-742dde24b958-tmp\") pod \"route-controller-manager-776cdc94d6-rwwx4\" (UID: \"e2948eb5-9117-467a-89c9-742dde24b958\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.004849 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/defa6a4f-7d53-4d4c-b0c7-cfb7489db02c-config-volume\") pod \"collect-profiles-29489760-vd2k2\" (UID: \"defa6a4f-7d53-4d4c-b0c7-cfb7489db02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-vd2k2" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.004873 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n62zj\" (UniqueName: \"kubernetes.io/projected/1d983c82-449b-4550-986f-5d378560b332-kube-api-access-n62zj\") pod \"openshift-apiserver-operator-846cbfc458-8chxh\" (UID: \"1d983c82-449b-4550-986f-5d378560b332\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-8chxh" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.004895 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c451c2d9-b218-4494-93e4-200ef1a5eb75-serving-cert\") pod \"openshift-config-operator-5777786469-979dq\" (UID: \"c451c2d9-b218-4494-93e4-200ef1a5eb75\") " pod="openshift-config-operator/openshift-config-operator-5777786469-979dq" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.004918 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/925904bd-cb57-4356-a6df-e72bd716398b-service-ca-bundle\") pod 
\"authentication-operator-7f5c659b84-qknx7\" (UID: \"925904bd-cb57-4356-a6df-e72bd716398b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.004938 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/defa6a4f-7d53-4d4c-b0c7-cfb7489db02c-secret-volume\") pod \"collect-profiles-29489760-vd2k2\" (UID: \"defa6a4f-7d53-4d4c-b0c7-cfb7489db02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-vd2k2" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.004981 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2948eb5-9117-467a-89c9-742dde24b958-serving-cert\") pod \"route-controller-manager-776cdc94d6-rwwx4\" (UID: \"e2948eb5-9117-467a-89c9-742dde24b958\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005008 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k5rck\" (UniqueName: \"kubernetes.io/projected/1b2941d0-33a9-475b-9ed9-5534ffda45c6-kube-api-access-k5rck\") pod \"kube-storage-version-migrator-operator-565b79b866-gk4w9\" (UID: \"1b2941d0-33a9-475b-9ed9-5534ffda45c6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gk4w9" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005036 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b2941d0-33a9-475b-9ed9-5534ffda45c6-config\") pod \"kube-storage-version-migrator-operator-565b79b866-gk4w9\" (UID: \"1b2941d0-33a9-475b-9ed9-5534ffda45c6\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gk4w9" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005063 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/925904bd-cb57-4356-a6df-e72bd716398b-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-qknx7\" (UID: \"925904bd-cb57-4356-a6df-e72bd716398b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005101 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b2941d0-33a9-475b-9ed9-5534ffda45c6-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-gk4w9\" (UID: \"1b2941d0-33a9-475b-9ed9-5534ffda45c6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gk4w9" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005161 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2948eb5-9117-467a-89c9-742dde24b958-config\") pod \"route-controller-manager-776cdc94d6-rwwx4\" (UID: \"e2948eb5-9117-467a-89c9-742dde24b958\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005181 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-ca-trust-extracted\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005199 5110 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-registry-certificates\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005216 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/925904bd-cb57-4356-a6df-e72bd716398b-config\") pod \"authentication-operator-7f5c659b84-qknx7\" (UID: \"925904bd-cb57-4356-a6df-e72bd716398b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005235 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmls6\" (UniqueName: \"kubernetes.io/projected/925904bd-cb57-4356-a6df-e72bd716398b-kube-api-access-kmls6\") pod \"authentication-operator-7f5c659b84-qknx7\" (UID: \"925904bd-cb57-4356-a6df-e72bd716398b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005250 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwjd4\" (UniqueName: \"kubernetes.io/projected/defa6a4f-7d53-4d4c-b0c7-cfb7489db02c-kube-api-access-bwjd4\") pod \"collect-profiles-29489760-vd2k2\" (UID: \"defa6a4f-7d53-4d4c-b0c7-cfb7489db02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-vd2k2" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005265 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d983c82-449b-4550-986f-5d378560b332-serving-cert\") pod 
\"openshift-apiserver-operator-846cbfc458-8chxh\" (UID: \"1d983c82-449b-4550-986f-5d378560b332\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-8chxh" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005279 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d983c82-449b-4550-986f-5d378560b332-config\") pod \"openshift-apiserver-operator-846cbfc458-8chxh\" (UID: \"1d983c82-449b-4550-986f-5d378560b332\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-8chxh" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005305 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcd2z\" (UniqueName: \"kubernetes.io/projected/1e6d8af4-164c-4fc2-9032-596a1de18a11-kube-api-access-vcd2z\") pod \"migrator-866fcbc849-c49nj\" (UID: \"1e6d8af4-164c-4fc2-9032-596a1de18a11\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-c49nj" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005322 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4nnv\" (UniqueName: \"kubernetes.io/projected/3451ac5b-8ad7-4419-b262-ec54012b9dc6-kube-api-access-m4nnv\") pod \"dns-default-vl2cl\" (UID: \"3451ac5b-8ad7-4419-b262-ec54012b9dc6\") " pod="openshift-dns/dns-default-vl2cl" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005363 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-clsvv\" (UniqueName: \"kubernetes.io/projected/e2948eb5-9117-467a-89c9-742dde24b958-kube-api-access-clsvv\") pod \"route-controller-manager-776cdc94d6-rwwx4\" (UID: \"e2948eb5-9117-467a-89c9-742dde24b958\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005394 5110 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3451ac5b-8ad7-4419-b262-ec54012b9dc6-config-volume\") pod \"dns-default-vl2cl\" (UID: \"3451ac5b-8ad7-4419-b262-ec54012b9dc6\") " pod="openshift-dns/dns-default-vl2cl" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005411 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3451ac5b-8ad7-4419-b262-ec54012b9dc6-metrics-tls\") pod \"dns-default-vl2cl\" (UID: \"3451ac5b-8ad7-4419-b262-ec54012b9dc6\") " pod="openshift-dns/dns-default-vl2cl" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005436 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-registry-tls\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005457 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-bound-sa-token\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005473 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2948eb5-9117-467a-89c9-742dde24b958-client-ca\") pod \"route-controller-manager-776cdc94d6-rwwx4\" (UID: \"e2948eb5-9117-467a-89c9-742dde24b958\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 
00:11:24.005497 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/925904bd-cb57-4356-a6df-e72bd716398b-serving-cert\") pod \"authentication-operator-7f5c659b84-qknx7\" (UID: \"925904bd-cb57-4356-a6df-e72bd716398b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005518 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-installation-pull-secrets\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005534 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3451ac5b-8ad7-4419-b262-ec54012b9dc6-tmp-dir\") pod \"dns-default-vl2cl\" (UID: \"3451ac5b-8ad7-4419-b262-ec54012b9dc6\") " pod="openshift-dns/dns-default-vl2cl" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005556 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c451c2d9-b218-4494-93e4-200ef1a5eb75-available-featuregates\") pod \"openshift-config-operator-5777786469-979dq\" (UID: \"c451c2d9-b218-4494-93e4-200ef1a5eb75\") " pod="openshift-config-operator/openshift-config-operator-5777786469-979dq" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.005572 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdjsf\" (UniqueName: \"kubernetes.io/projected/c451c2d9-b218-4494-93e4-200ef1a5eb75-kube-api-access-wdjsf\") pod 
\"openshift-config-operator-5777786469-979dq\" (UID: \"c451c2d9-b218-4494-93e4-200ef1a5eb75\") " pod="openshift-config-operator/openshift-config-operator-5777786469-979dq" Jan 26 00:11:24 crc kubenswrapper[5110]: E0126 00:11:24.007208 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:24.507187794 +0000 UTC m=+141.736086393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007328 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007370 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gk4w9"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007385 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-g57zz"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007401 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2gvm5"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007414 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-46ljf"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007429 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-c5l6l"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007441 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-cg4nl"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007453 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-c49nj"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007468 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007480 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007492 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489760-vd2k2"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007504 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-979dq"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007516 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-l6gvr"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007527 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-jptld"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007543 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-7plfs"] Jan 26 00:11:24 crc 
kubenswrapper[5110]: I0126 00:11:24.007554 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-vl2cl"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007566 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-2dt85"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007578 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007590 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-55st2"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007602 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-bjwbh"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007612 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29489760-jjbnv"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007624 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pzlvv"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.007642 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-bqdwj"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.008440 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-trusted-ca\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.008971 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/e2948eb5-9117-467a-89c9-742dde24b958-tmp\") pod \"route-controller-manager-776cdc94d6-rwwx4\" (UID: \"e2948eb5-9117-467a-89c9-742dde24b958\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.011188 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.011629 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.011637 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.012786 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b2941d0-33a9-475b-9ed9-5534ffda45c6-config\") pod \"kube-storage-version-migrator-operator-565b79b866-gk4w9\" (UID: \"1b2941d0-33a9-475b-9ed9-5534ffda45c6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gk4w9" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.013246 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2948eb5-9117-467a-89c9-742dde24b958-config\") pod \"route-controller-manager-776cdc94d6-rwwx4\" (UID: \"e2948eb5-9117-467a-89c9-742dde24b958\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.014511 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 26 00:11:24 crc 
kubenswrapper[5110]: I0126 00:11:24.014573 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.014943 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-h4rkf"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.014981 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bqdwj"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.015005 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.015079 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.015337 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.020514 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bqdwj" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.021381 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.021429 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-ca-trust-extracted\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.022164 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-registry-tls\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.022349 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2948eb5-9117-467a-89c9-742dde24b958-serving-cert\") pod \"route-controller-manager-776cdc94d6-rwwx4\" (UID: \"e2948eb5-9117-467a-89c9-742dde24b958\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.024119 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.034784 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-registry-certificates\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.037618 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-installation-pull-secrets\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.042536 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.042719 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2948eb5-9117-467a-89c9-742dde24b958-client-ca\") pod \"route-controller-manager-776cdc94d6-rwwx4\" (UID: \"e2948eb5-9117-467a-89c9-742dde24b958\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.051510 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.074825 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b2941d0-33a9-475b-9ed9-5534ffda45c6-serving-cert\") pod 
\"kube-storage-version-migrator-operator-565b79b866-gk4w9\" (UID: \"1b2941d0-33a9-475b-9ed9-5534ffda45c6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gk4w9" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.074912 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.108063 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.108968 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119456 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kmls6\" (UniqueName: \"kubernetes.io/projected/925904bd-cb57-4356-a6df-e72bd716398b-kube-api-access-kmls6\") pod \"authentication-operator-7f5c659b84-qknx7\" (UID: \"925904bd-cb57-4356-a6df-e72bd716398b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119499 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bwjd4\" (UniqueName: \"kubernetes.io/projected/defa6a4f-7d53-4d4c-b0c7-cfb7489db02c-kube-api-access-bwjd4\") pod \"collect-profiles-29489760-vd2k2\" (UID: \"defa6a4f-7d53-4d4c-b0c7-cfb7489db02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-vd2k2" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119520 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1d983c82-449b-4550-986f-5d378560b332-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-8chxh\" (UID: \"1d983c82-449b-4550-986f-5d378560b332\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-8chxh" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119545 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/49efa7f2-d990-4552-80cf-5d4a72a32ec7-stats-auth\") pod \"router-default-68cf44c8b8-n929c\" (UID: \"49efa7f2-d990-4552-80cf-5d4a72a32ec7\") " pod="openshift-ingress/router-default-68cf44c8b8-n929c" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119581 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4jkm\" (UniqueName: \"kubernetes.io/projected/ddb3d5eb-9401-4032-9bcd-c2f798fbaf51-kube-api-access-b4jkm\") pod \"cluster-image-registry-operator-86c45576b9-t49fp\" (UID: \"ddb3d5eb-9401-4032-9bcd-c2f798fbaf51\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119606 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119624 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ddb3d5eb-9401-4032-9bcd-c2f798fbaf51-tmp\") pod \"cluster-image-registry-operator-86c45576b9-t49fp\" (UID: \"ddb3d5eb-9401-4032-9bcd-c2f798fbaf51\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119646 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ba4d5a4-755b-4d6e-b250-cd705b244775-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-bjwbh\" (UID: \"4ba4d5a4-755b-4d6e-b250-cd705b244775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119661 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhpqv\" (UniqueName: \"kubernetes.io/projected/4ba4d5a4-755b-4d6e-b250-cd705b244775-kube-api-access-mhpqv\") pod \"marketplace-operator-547dbd544d-bjwbh\" (UID: \"4ba4d5a4-755b-4d6e-b250-cd705b244775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119696 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ddb3d5eb-9401-4032-9bcd-c2f798fbaf51-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-t49fp\" (UID: \"ddb3d5eb-9401-4032-9bcd-c2f798fbaf51\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119727 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3451ac5b-8ad7-4419-b262-ec54012b9dc6-tmp-dir\") pod \"dns-default-vl2cl\" (UID: \"3451ac5b-8ad7-4419-b262-ec54012b9dc6\") " pod="openshift-dns/dns-default-vl2cl" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119746 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/3451ac5b-8ad7-4419-b262-ec54012b9dc6-metrics-tls\") pod \"dns-default-vl2cl\" (UID: \"3451ac5b-8ad7-4419-b262-ec54012b9dc6\") " pod="openshift-dns/dns-default-vl2cl" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119763 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6dh9\" (UniqueName: \"kubernetes.io/projected/5f875209-3895-465c-9992-37ef91c4dda9-kube-api-access-t6dh9\") pod \"machine-config-controller-f9cdd68f7-55st2\" (UID: \"5f875209-3895-465c-9992-37ef91c4dda9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-55st2" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119778 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16e64d17-7476-4caf-af16-73d93e2c1085-webhook-cert\") pod \"packageserver-7d4fc7d867-7fnwf\" (UID: \"16e64d17-7476-4caf-af16-73d93e2c1085\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119810 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4464550b-e8e5-4f07-8e54-40eb792eb201-csi-data-dir\") pod \"csi-hostpathplugin-pzlvv\" (UID: \"4464550b-e8e5-4f07-8e54-40eb792eb201\") " pod="hostpath-provisioner/csi-hostpathplugin-pzlvv" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119833 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6m5r\" (UniqueName: \"kubernetes.io/projected/497eadd3-187e-4dfb-82c3-fad0d59eb723-kube-api-access-b6m5r\") pod \"image-pruner-29489760-jjbnv\" (UID: \"497eadd3-187e-4dfb-82c3-fad0d59eb723\") " pod="openshift-image-registry/image-pruner-29489760-jjbnv" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119860 
5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4464550b-e8e5-4f07-8e54-40eb792eb201-plugins-dir\") pod \"csi-hostpathplugin-pzlvv\" (UID: \"4464550b-e8e5-4f07-8e54-40eb792eb201\") " pod="hostpath-provisioner/csi-hostpathplugin-pzlvv" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119884 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ddb3d5eb-9401-4032-9bcd-c2f798fbaf51-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-t49fp\" (UID: \"ddb3d5eb-9401-4032-9bcd-c2f798fbaf51\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119916 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bcb987d-ffcc-436f-b7fb-75a4d2e4952d-trusted-ca\") pod \"console-operator-67c89758df-67svm\" (UID: \"3bcb987d-ffcc-436f-b7fb-75a4d2e4952d\") " pod="openshift-console-operator/console-operator-67c89758df-67svm" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119943 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/925904bd-cb57-4356-a6df-e72bd716398b-serving-cert\") pod \"authentication-operator-7f5c659b84-qknx7\" (UID: \"925904bd-cb57-4356-a6df-e72bd716398b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119959 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1eb870e-2fb9-4e38-94b6-80fb61ed6ad3-serving-cert\") pod \"service-ca-operator-5b9c976747-2dt85\" 
(UID: \"d1eb870e-2fb9-4e38-94b6-80fb61ed6ad3\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2dt85" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119982 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c451c2d9-b218-4494-93e4-200ef1a5eb75-available-featuregates\") pod \"openshift-config-operator-5777786469-979dq\" (UID: \"c451c2d9-b218-4494-93e4-200ef1a5eb75\") " pod="openshift-config-operator/openshift-config-operator-5777786469-979dq" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.119997 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfnsx\" (UniqueName: \"kubernetes.io/projected/d1eb870e-2fb9-4e38-94b6-80fb61ed6ad3-kube-api-access-gfnsx\") pod \"service-ca-operator-5b9c976747-2dt85\" (UID: \"d1eb870e-2fb9-4e38-94b6-80fb61ed6ad3\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2dt85" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120014 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1cea4c4b-a35b-4b39-a268-15234da1bc4a-auth-proxy-config\") pod \"machine-approver-54c688565-tgtdw\" (UID: \"1cea4c4b-a35b-4b39-a268-15234da1bc4a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-tgtdw" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120041 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b424\" (UniqueName: \"kubernetes.io/projected/9cf38d4d-d415-4e81-85c3-bb2c65f7423b-kube-api-access-9b424\") pod \"machine-config-operator-67c9d58cbb-4j4dd\" (UID: \"9cf38d4d-d415-4e81-85c3-bb2c65f7423b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4j4dd" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 
00:11:24.120065 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ce4b4a91-4053-4fb9-be7e-ac51e71a829e-profile-collector-cert\") pod \"olm-operator-5cdf44d969-99xsn\" (UID: \"ce4b4a91-4053-4fb9-be7e-ac51e71a829e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120085 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p894f\" (UniqueName: \"kubernetes.io/projected/ce4b4a91-4053-4fb9-be7e-ac51e71a829e-kube-api-access-p894f\") pod \"olm-operator-5cdf44d969-99xsn\" (UID: \"ce4b4a91-4053-4fb9-be7e-ac51e71a829e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120103 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d7cb0851-2110-44f5-b3fc-77478d1fd49f-webhook-certs\") pod \"multus-admission-controller-69db94689b-g57zz\" (UID: \"d7cb0851-2110-44f5-b3fc-77478d1fd49f\") " pod="openshift-multus/multus-admission-controller-69db94689b-g57zz" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120128 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ddb3d5eb-9401-4032-9bcd-c2f798fbaf51-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-t49fp\" (UID: \"ddb3d5eb-9401-4032-9bcd-c2f798fbaf51\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120150 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n62zj\" (UniqueName: 
\"kubernetes.io/projected/1d983c82-449b-4550-986f-5d378560b332-kube-api-access-n62zj\") pod \"openshift-apiserver-operator-846cbfc458-8chxh\" (UID: \"1d983c82-449b-4550-986f-5d378560b332\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-8chxh" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120165 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cea4c4b-a35b-4b39-a268-15234da1bc4a-config\") pod \"machine-approver-54c688565-tgtdw\" (UID: \"1cea4c4b-a35b-4b39-a268-15234da1bc4a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-tgtdw" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120183 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c451c2d9-b218-4494-93e4-200ef1a5eb75-serving-cert\") pod \"openshift-config-operator-5777786469-979dq\" (UID: \"c451c2d9-b218-4494-93e4-200ef1a5eb75\") " pod="openshift-config-operator/openshift-config-operator-5777786469-979dq" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120198 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24hmk\" (UniqueName: \"kubernetes.io/projected/4464550b-e8e5-4f07-8e54-40eb792eb201-kube-api-access-24hmk\") pod \"csi-hostpathplugin-pzlvv\" (UID: \"4464550b-e8e5-4f07-8e54-40eb792eb201\") " pod="hostpath-provisioner/csi-hostpathplugin-pzlvv" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120226 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9cf38d4d-d415-4e81-85c3-bb2c65f7423b-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-4j4dd\" (UID: \"9cf38d4d-d415-4e81-85c3-bb2c65f7423b\") " 
pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4j4dd" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120243 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/765dcc91-6c2c-4e8e-a85d-b083d0a8e261-node-bootstrap-token\") pod \"machine-config-server-gpd7g\" (UID: \"765dcc91-6c2c-4e8e-a85d-b083d0a8e261\") " pod="openshift-machine-config-operator/machine-config-server-gpd7g" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120260 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730-config\") pod \"machine-api-operator-755bb95488-jptld\" (UID: \"10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jptld" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120278 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kjn2\" (UniqueName: \"kubernetes.io/projected/1cea4c4b-a35b-4b39-a268-15234da1bc4a-kube-api-access-6kjn2\") pod \"machine-approver-54c688565-tgtdw\" (UID: \"1cea4c4b-a35b-4b39-a268-15234da1bc4a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-tgtdw" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120294 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/77aa983a-f3c1-4799-a84b-c3c7a381a1bc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-crnjv\" (UID: \"77aa983a-f3c1-4799-a84b-c3c7a381a1bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-crnjv" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120319 5110 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/925904bd-cb57-4356-a6df-e72bd716398b-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-qknx7\" (UID: \"925904bd-cb57-4356-a6df-e72bd716398b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120344 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-jptld\" (UID: \"10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jptld" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120360 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj9k5\" (UniqueName: \"kubernetes.io/projected/765dcc91-6c2c-4e8e-a85d-b083d0a8e261-kube-api-access-tj9k5\") pod \"machine-config-server-gpd7g\" (UID: \"765dcc91-6c2c-4e8e-a85d-b083d0a8e261\") " pod="openshift-machine-config-operator/machine-config-server-gpd7g" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120376 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bcb987d-ffcc-436f-b7fb-75a4d2e4952d-serving-cert\") pod \"console-operator-67c89758df-67svm\" (UID: \"3bcb987d-ffcc-436f-b7fb-75a4d2e4952d\") " pod="openshift-console-operator/console-operator-67c89758df-67svm" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120397 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ba4d5a4-755b-4d6e-b250-cd705b244775-tmp\") pod 
\"marketplace-operator-547dbd544d-bjwbh\" (UID: \"4ba4d5a4-755b-4d6e-b250-cd705b244775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120411 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/765dcc91-6c2c-4e8e-a85d-b083d0a8e261-certs\") pod \"machine-config-server-gpd7g\" (UID: \"765dcc91-6c2c-4e8e-a85d-b083d0a8e261\") " pod="openshift-machine-config-operator/machine-config-server-gpd7g" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120429 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/925904bd-cb57-4356-a6df-e72bd716398b-config\") pod \"authentication-operator-7f5c659b84-qknx7\" (UID: \"925904bd-cb57-4356-a6df-e72bd716398b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120449 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49efa7f2-d990-4552-80cf-5d4a72a32ec7-metrics-certs\") pod \"router-default-68cf44c8b8-n929c\" (UID: \"49efa7f2-d990-4552-80cf-5d4a72a32ec7\") " pod="openshift-ingress/router-default-68cf44c8b8-n929c" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120476 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g67j4\" (UniqueName: \"kubernetes.io/projected/3a884209-147f-4786-9a92-5eb1a685dc1b-kube-api-access-g67j4\") pod \"package-server-manager-77f986bd66-h4rkf\" (UID: \"3a884209-147f-4786-9a92-5eb1a685dc1b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-h4rkf" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120501 5110 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d983c82-449b-4550-986f-5d378560b332-config\") pod \"openshift-apiserver-operator-846cbfc458-8chxh\" (UID: \"1d983c82-449b-4550-986f-5d378560b332\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-8chxh" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120517 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3-config\") pod \"openshift-kube-scheduler-operator-54f497555d-c5l6l\" (UID: \"6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-c5l6l" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120534 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/497eadd3-187e-4dfb-82c3-fad0d59eb723-serviceca\") pod \"image-pruner-29489760-jjbnv\" (UID: \"497eadd3-187e-4dfb-82c3-fad0d59eb723\") " pod="openshift-image-registry/image-pruner-29489760-jjbnv" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120621 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1cea4c4b-a35b-4b39-a268-15234da1bc4a-machine-approver-tls\") pod \"machine-approver-54c688565-tgtdw\" (UID: \"1cea4c4b-a35b-4b39-a268-15234da1bc4a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-tgtdw" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120684 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0be01a81-9d33-487a-abf9-0284a7b3f24b-signing-cabundle\") pod \"service-ca-74545575db-l6gvr\" (UID: 
\"0be01a81-9d33-487a-abf9-0284a7b3f24b\") " pod="openshift-service-ca/service-ca-74545575db-l6gvr" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120749 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m4nnv\" (UniqueName: \"kubernetes.io/projected/3451ac5b-8ad7-4419-b262-ec54012b9dc6-kube-api-access-m4nnv\") pod \"dns-default-vl2cl\" (UID: \"3451ac5b-8ad7-4419-b262-ec54012b9dc6\") " pod="openshift-dns/dns-default-vl2cl" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120777 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ce4b4a91-4053-4fb9-be7e-ac51e71a829e-tmpfs\") pod \"olm-operator-5cdf44d969-99xsn\" (UID: \"ce4b4a91-4053-4fb9-be7e-ac51e71a829e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120817 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/ddb3d5eb-9401-4032-9bcd-c2f798fbaf51-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-t49fp\" (UID: \"ddb3d5eb-9401-4032-9bcd-c2f798fbaf51\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.120854 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9cf38d4d-d415-4e81-85c3-bb2c65f7423b-images\") pod \"machine-config-operator-67c9d58cbb-4j4dd\" (UID: \"9cf38d4d-d415-4e81-85c3-bb2c65f7423b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4j4dd" Jan 26 00:11:24 crc kubenswrapper[5110]: E0126 00:11:24.120921 5110 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:24.620905952 +0000 UTC m=+141.849804561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.121301 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-c5l6l\" (UID: \"6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-c5l6l" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.121322 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-c5l6l\" (UID: \"6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-c5l6l" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.121347 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3451ac5b-8ad7-4419-b262-ec54012b9dc6-config-volume\") pod \"dns-default-vl2cl\" (UID: \"3451ac5b-8ad7-4419-b262-ec54012b9dc6\") " 
pod="openshift-dns/dns-default-vl2cl" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.121364 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e17b81d4-9c4f-44ee-acaa-e008890ca2fd-srv-cert\") pod \"catalog-operator-75ff9f647d-pcrzq\" (UID: \"e17b81d4-9c4f-44ee-acaa-e008890ca2fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.121381 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79nbg\" (UniqueName: \"kubernetes.io/projected/16e64d17-7476-4caf-af16-73d93e2c1085-kube-api-access-79nbg\") pod \"packageserver-7d4fc7d867-7fnwf\" (UID: \"16e64d17-7476-4caf-af16-73d93e2c1085\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.121400 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1eb870e-2fb9-4e38-94b6-80fb61ed6ad3-config\") pod \"service-ca-operator-5b9c976747-2dt85\" (UID: \"d1eb870e-2fb9-4e38-94b6-80fb61ed6ad3\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2dt85" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.121415 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4464550b-e8e5-4f07-8e54-40eb792eb201-socket-dir\") pod \"csi-hostpathplugin-pzlvv\" (UID: \"4464550b-e8e5-4f07-8e54-40eb792eb201\") " pod="hostpath-provisioner/csi-hostpathplugin-pzlvv" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.121431 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/16e64d17-7476-4caf-af16-73d93e2c1085-apiservice-cert\") pod \"packageserver-7d4fc7d867-7fnwf\" (UID: \"16e64d17-7476-4caf-af16-73d93e2c1085\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.121481 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqdts\" (UniqueName: \"kubernetes.io/projected/49efa7f2-d990-4552-80cf-5d4a72a32ec7-kube-api-access-hqdts\") pod \"router-default-68cf44c8b8-n929c\" (UID: \"49efa7f2-d990-4552-80cf-5d4a72a32ec7\") " pod="openshift-ingress/router-default-68cf44c8b8-n929c" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.121497 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bcb987d-ffcc-436f-b7fb-75a4d2e4952d-config\") pod \"console-operator-67c89758df-67svm\" (UID: \"3bcb987d-ffcc-436f-b7fb-75a4d2e4952d\") " pod="openshift-console-operator/console-operator-67c89758df-67svm" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.121525 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wdjsf\" (UniqueName: \"kubernetes.io/projected/c451c2d9-b218-4494-93e4-200ef1a5eb75-kube-api-access-wdjsf\") pod \"openshift-config-operator-5777786469-979dq\" (UID: \"c451c2d9-b218-4494-93e4-200ef1a5eb75\") " pod="openshift-config-operator/openshift-config-operator-5777786469-979dq" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.121541 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e17b81d4-9c4f-44ee-acaa-e008890ca2fd-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-pcrzq\" (UID: \"e17b81d4-9c4f-44ee-acaa-e008890ca2fd\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.121557 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ce4b4a91-4053-4fb9-be7e-ac51e71a829e-srv-cert\") pod \"olm-operator-5cdf44d969-99xsn\" (UID: \"ce4b4a91-4053-4fb9-be7e-ac51e71a829e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.121574 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbpsj\" (UniqueName: \"kubernetes.io/projected/0be01a81-9d33-487a-abf9-0284a7b3f24b-kube-api-access-kbpsj\") pod \"service-ca-74545575db-l6gvr\" (UID: \"0be01a81-9d33-487a-abf9-0284a7b3f24b\") " pod="openshift-service-ca/service-ca-74545575db-l6gvr" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.121590 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5fxf\" (UniqueName: \"kubernetes.io/projected/77aa983a-f3c1-4799-a84b-c3c7a381a1bc-kube-api-access-k5fxf\") pod \"control-plane-machine-set-operator-75ffdb6fcd-crnjv\" (UID: \"77aa983a-f3c1-4799-a84b-c3c7a381a1bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-crnjv" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.121613 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/defa6a4f-7d53-4d4c-b0c7-cfb7489db02c-secret-volume\") pod \"collect-profiles-29489760-vd2k2\" (UID: \"defa6a4f-7d53-4d4c-b0c7-cfb7489db02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-vd2k2" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.128564 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" 
(UniqueName: \"kubernetes.io/empty-dir/e17b81d4-9c4f-44ee-acaa-e008890ca2fd-tmpfs\") pod \"catalog-operator-75ff9f647d-pcrzq\" (UID: \"e17b81d4-9c4f-44ee-acaa-e008890ca2fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.128663 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9cf38d4d-d415-4e81-85c3-bb2c65f7423b-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-4j4dd\" (UID: \"9cf38d4d-d415-4e81-85c3-bb2c65f7423b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4j4dd" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.128706 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4464550b-e8e5-4f07-8e54-40eb792eb201-mountpoint-dir\") pod \"csi-hostpathplugin-pzlvv\" (UID: \"4464550b-e8e5-4f07-8e54-40eb792eb201\") " pod="hostpath-provisioner/csi-hostpathplugin-pzlvv" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.128755 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/defa6a4f-7d53-4d4c-b0c7-cfb7489db02c-config-volume\") pod \"collect-profiles-29489760-vd2k2\" (UID: \"defa6a4f-7d53-4d4c-b0c7-cfb7489db02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-vd2k2" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.128795 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f875209-3895-465c-9992-37ef91c4dda9-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-55st2\" (UID: \"5f875209-3895-465c-9992-37ef91c4dda9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-55st2" 
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.130279 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d983c82-449b-4550-986f-5d378560b332-config\") pod \"openshift-apiserver-operator-846cbfc458-8chxh\" (UID: \"1d983c82-449b-4550-986f-5d378560b332\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-8chxh" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.130439 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3451ac5b-8ad7-4419-b262-ec54012b9dc6-tmp-dir\") pod \"dns-default-vl2cl\" (UID: \"3451ac5b-8ad7-4419-b262-ec54012b9dc6\") " pod="openshift-dns/dns-default-vl2cl" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.130471 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c451c2d9-b218-4494-93e4-200ef1a5eb75-available-featuregates\") pod \"openshift-config-operator-5777786469-979dq\" (UID: \"c451c2d9-b218-4494-93e4-200ef1a5eb75\") " pod="openshift-config-operator/openshift-config-operator-5777786469-979dq" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.131667 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c451c2d9-b218-4494-93e4-200ef1a5eb75-serving-cert\") pod \"openshift-config-operator-5777786469-979dq\" (UID: \"c451c2d9-b218-4494-93e4-200ef1a5eb75\") " pod="openshift-config-operator/openshift-config-operator-5777786469-979dq" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.132375 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/925904bd-cb57-4356-a6df-e72bd716398b-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-qknx7\" (UID: \"925904bd-cb57-4356-a6df-e72bd716398b\") " 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.132407 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/16e64d17-7476-4caf-af16-73d93e2c1085-tmpfs\") pod \"packageserver-7d4fc7d867-7fnwf\" (UID: \"16e64d17-7476-4caf-af16-73d93e2c1085\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.132435 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a884209-147f-4786-9a92-5eb1a685dc1b-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-h4rkf\" (UID: \"3a884209-147f-4786-9a92-5eb1a685dc1b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-h4rkf" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.132470 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq9mh\" (UniqueName: \"kubernetes.io/projected/e17b81d4-9c4f-44ee-acaa-e008890ca2fd-kube-api-access-cq9mh\") pod \"catalog-operator-75ff9f647d-pcrzq\" (UID: \"e17b81d4-9c4f-44ee-acaa-e008890ca2fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.132493 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4ba4d5a4-755b-4d6e-b250-cd705b244775-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-bjwbh\" (UID: \"4ba4d5a4-755b-4d6e-b250-cd705b244775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.132516 5110 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4464550b-e8e5-4f07-8e54-40eb792eb201-registration-dir\") pod \"csi-hostpathplugin-pzlvv\" (UID: \"4464550b-e8e5-4f07-8e54-40eb792eb201\") " pod="hostpath-provisioner/csi-hostpathplugin-pzlvv" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.132561 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sklkw\" (UniqueName: \"kubernetes.io/projected/3bcb987d-ffcc-436f-b7fb-75a4d2e4952d-kube-api-access-sklkw\") pod \"console-operator-67c89758df-67svm\" (UID: \"3bcb987d-ffcc-436f-b7fb-75a4d2e4952d\") " pod="openshift-console-operator/console-operator-67c89758df-67svm" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.132588 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/49efa7f2-d990-4552-80cf-5d4a72a32ec7-default-certificate\") pod \"router-default-68cf44c8b8-n929c\" (UID: \"49efa7f2-d990-4552-80cf-5d4a72a32ec7\") " pod="openshift-ingress/router-default-68cf44c8b8-n929c" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.132619 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49efa7f2-d990-4552-80cf-5d4a72a32ec7-service-ca-bundle\") pod \"router-default-68cf44c8b8-n929c\" (UID: \"49efa7f2-d990-4552-80cf-5d4a72a32ec7\") " pod="openshift-ingress/router-default-68cf44c8b8-n929c" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.132645 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db2dq\" (UniqueName: \"kubernetes.io/projected/d7cb0851-2110-44f5-b3fc-77478d1fd49f-kube-api-access-db2dq\") pod \"multus-admission-controller-69db94689b-g57zz\" 
(UID: \"d7cb0851-2110-44f5-b3fc-77478d1fd49f\") " pod="openshift-multus/multus-admission-controller-69db94689b-g57zz" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.132676 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f875209-3895-465c-9992-37ef91c4dda9-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-55st2\" (UID: \"5f875209-3895-465c-9992-37ef91c4dda9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-55st2" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.132697 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvhfg\" (UniqueName: \"kubernetes.io/projected/10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730-kube-api-access-bvhfg\") pod \"machine-api-operator-755bb95488-jptld\" (UID: \"10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jptld" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.144440 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d983c82-449b-4550-986f-5d378560b332-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-8chxh\" (UID: \"1d983c82-449b-4550-986f-5d378560b332\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-8chxh" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.144576 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0be01a81-9d33-487a-abf9-0284a7b3f24b-signing-key\") pod \"service-ca-74545575db-l6gvr\" (UID: \"0be01a81-9d33-487a-abf9-0284a7b3f24b\") " pod="openshift-service-ca/service-ca-74545575db-l6gvr" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.144649 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-c5l6l\" (UID: \"6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-c5l6l" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.144689 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vcd2z\" (UniqueName: \"kubernetes.io/projected/1e6d8af4-164c-4fc2-9032-596a1de18a11-kube-api-access-vcd2z\") pod \"migrator-866fcbc849-c49nj\" (UID: \"1e6d8af4-164c-4fc2-9032-596a1de18a11\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-c49nj" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.144851 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730-images\") pod \"machine-api-operator-755bb95488-jptld\" (UID: \"10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jptld" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.149765 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.150079 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.167427 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.187321 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 26 
00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.205958 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.251822 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.251928 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-jptld\" (UID: \"10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jptld" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.251953 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tj9k5\" (UniqueName: \"kubernetes.io/projected/765dcc91-6c2c-4e8e-a85d-b083d0a8e261-kube-api-access-tj9k5\") pod \"machine-config-server-gpd7g\" (UID: \"765dcc91-6c2c-4e8e-a85d-b083d0a8e261\") " pod="openshift-machine-config-operator/machine-config-server-gpd7g" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.251973 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bcb987d-ffcc-436f-b7fb-75a4d2e4952d-serving-cert\") pod \"console-operator-67c89758df-67svm\" (UID: \"3bcb987d-ffcc-436f-b7fb-75a4d2e4952d\") " pod="openshift-console-operator/console-operator-67c89758df-67svm" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.251990 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ba4d5a4-755b-4d6e-b250-cd705b244775-tmp\") pod \"marketplace-operator-547dbd544d-bjwbh\" (UID: \"4ba4d5a4-755b-4d6e-b250-cd705b244775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.252005 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/765dcc91-6c2c-4e8e-a85d-b083d0a8e261-certs\") pod \"machine-config-server-gpd7g\" (UID: \"765dcc91-6c2c-4e8e-a85d-b083d0a8e261\") " pod="openshift-machine-config-operator/machine-config-server-gpd7g" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.252024 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49efa7f2-d990-4552-80cf-5d4a72a32ec7-metrics-certs\") pod \"router-default-68cf44c8b8-n929c\" (UID: \"49efa7f2-d990-4552-80cf-5d4a72a32ec7\") " pod="openshift-ingress/router-default-68cf44c8b8-n929c" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.252045 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g67j4\" (UniqueName: \"kubernetes.io/projected/3a884209-147f-4786-9a92-5eb1a685dc1b-kube-api-access-g67j4\") pod \"package-server-manager-77f986bd66-h4rkf\" (UID: \"3a884209-147f-4786-9a92-5eb1a685dc1b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-h4rkf" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.252068 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3-config\") pod \"openshift-kube-scheduler-operator-54f497555d-c5l6l\" (UID: \"6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-c5l6l" Jan 26 00:11:24 crc kubenswrapper[5110]: E0126 
00:11:24.252086 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:24.752062914 +0000 UTC m=+141.980961523 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.252117 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/497eadd3-187e-4dfb-82c3-fad0d59eb723-serviceca\") pod \"image-pruner-29489760-jjbnv\" (UID: \"497eadd3-187e-4dfb-82c3-fad0d59eb723\") " pod="openshift-image-registry/image-pruner-29489760-jjbnv" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.252138 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1cea4c4b-a35b-4b39-a268-15234da1bc4a-machine-approver-tls\") pod \"machine-approver-54c688565-tgtdw\" (UID: \"1cea4c4b-a35b-4b39-a268-15234da1bc4a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-tgtdw" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.252157 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0be01a81-9d33-487a-abf9-0284a7b3f24b-signing-cabundle\") pod \"service-ca-74545575db-l6gvr\" (UID: \"0be01a81-9d33-487a-abf9-0284a7b3f24b\") " 
pod="openshift-service-ca/service-ca-74545575db-l6gvr" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.252189 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ce4b4a91-4053-4fb9-be7e-ac51e71a829e-tmpfs\") pod \"olm-operator-5cdf44d969-99xsn\" (UID: \"ce4b4a91-4053-4fb9-be7e-ac51e71a829e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.252215 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/ddb3d5eb-9401-4032-9bcd-c2f798fbaf51-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-t49fp\" (UID: \"ddb3d5eb-9401-4032-9bcd-c2f798fbaf51\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.252241 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9cf38d4d-d415-4e81-85c3-bb2c65f7423b-images\") pod \"machine-config-operator-67c9d58cbb-4j4dd\" (UID: \"9cf38d4d-d415-4e81-85c3-bb2c65f7423b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4j4dd" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.252633 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24efc76d-eb4f-4036-804e-a71705eb1a78-cert\") pod \"ingress-canary-bqdwj\" (UID: \"24efc76d-eb4f-4036-804e-a71705eb1a78\") " pod="openshift-ingress-canary/ingress-canary-bqdwj" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.252659 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3-config\") pod 
\"openshift-kube-scheduler-operator-54f497555d-c5l6l\" (UID: \"6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-c5l6l" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.252670 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-c5l6l\" (UID: \"6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-c5l6l" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.252690 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-c5l6l\" (UID: \"6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-c5l6l" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.252715 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e17b81d4-9c4f-44ee-acaa-e008890ca2fd-srv-cert\") pod \"catalog-operator-75ff9f647d-pcrzq\" (UID: \"e17b81d4-9c4f-44ee-acaa-e008890ca2fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.252744 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-79nbg\" (UniqueName: \"kubernetes.io/projected/16e64d17-7476-4caf-af16-73d93e2c1085-kube-api-access-79nbg\") pod \"packageserver-7d4fc7d867-7fnwf\" (UID: \"16e64d17-7476-4caf-af16-73d93e2c1085\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.252774 
5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6kl9\" (UniqueName: \"kubernetes.io/projected/8c8a6437-b0cb-4825-999f-6e523fd394e9-kube-api-access-z6kl9\") pod \"cni-sysctl-allowlist-ds-hvgtl\" (UID: \"8c8a6437-b0cb-4825-999f-6e523fd394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.253469 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.253632 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.256149 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-c5l6l\" (UID: \"6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-c5l6l" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.256295 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1eb870e-2fb9-4e38-94b6-80fb61ed6ad3-config\") pod \"service-ca-operator-5b9c976747-2dt85\" (UID: \"d1eb870e-2fb9-4e38-94b6-80fb61ed6ad3\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2dt85" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.256322 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4464550b-e8e5-4f07-8e54-40eb792eb201-socket-dir\") pod \"csi-hostpathplugin-pzlvv\" (UID: \"4464550b-e8e5-4f07-8e54-40eb792eb201\") " 
pod="hostpath-provisioner/csi-hostpathplugin-pzlvv" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.256339 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/16e64d17-7476-4caf-af16-73d93e2c1085-apiservice-cert\") pod \"packageserver-7d4fc7d867-7fnwf\" (UID: \"16e64d17-7476-4caf-af16-73d93e2c1085\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.256365 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hqdts\" (UniqueName: \"kubernetes.io/projected/49efa7f2-d990-4552-80cf-5d4a72a32ec7-kube-api-access-hqdts\") pod \"router-default-68cf44c8b8-n929c\" (UID: \"49efa7f2-d990-4552-80cf-5d4a72a32ec7\") " pod="openshift-ingress/router-default-68cf44c8b8-n929c" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.256382 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bcb987d-ffcc-436f-b7fb-75a4d2e4952d-config\") pod \"console-operator-67c89758df-67svm\" (UID: \"3bcb987d-ffcc-436f-b7fb-75a4d2e4952d\") " pod="openshift-console-operator/console-operator-67c89758df-67svm" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.256406 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e17b81d4-9c4f-44ee-acaa-e008890ca2fd-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-pcrzq\" (UID: \"e17b81d4-9c4f-44ee-acaa-e008890ca2fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.256422 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ce4b4a91-4053-4fb9-be7e-ac51e71a829e-srv-cert\") pod 
\"olm-operator-5cdf44d969-99xsn\" (UID: \"ce4b4a91-4053-4fb9-be7e-ac51e71a829e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.256440 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8c8a6437-b0cb-4825-999f-6e523fd394e9-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hvgtl\" (UID: \"8c8a6437-b0cb-4825-999f-6e523fd394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.256459 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kbpsj\" (UniqueName: \"kubernetes.io/projected/0be01a81-9d33-487a-abf9-0284a7b3f24b-kube-api-access-kbpsj\") pod \"service-ca-74545575db-l6gvr\" (UID: \"0be01a81-9d33-487a-abf9-0284a7b3f24b\") " pod="openshift-service-ca/service-ca-74545575db-l6gvr" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.256477 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k5fxf\" (UniqueName: \"kubernetes.io/projected/77aa983a-f3c1-4799-a84b-c3c7a381a1bc-kube-api-access-k5fxf\") pod \"control-plane-machine-set-operator-75ffdb6fcd-crnjv\" (UID: \"77aa983a-f3c1-4799-a84b-c3c7a381a1bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-crnjv" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.256500 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e17b81d4-9c4f-44ee-acaa-e008890ca2fd-tmpfs\") pod \"catalog-operator-75ff9f647d-pcrzq\" (UID: \"e17b81d4-9c4f-44ee-acaa-e008890ca2fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.256496 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ce4b4a91-4053-4fb9-be7e-ac51e71a829e-tmpfs\") pod \"olm-operator-5cdf44d969-99xsn\" (UID: \"ce4b4a91-4053-4fb9-be7e-ac51e71a829e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.256517 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9cf38d4d-d415-4e81-85c3-bb2c65f7423b-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-4j4dd\" (UID: \"9cf38d4d-d415-4e81-85c3-bb2c65f7423b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4j4dd" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.256652 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4464550b-e8e5-4f07-8e54-40eb792eb201-mountpoint-dir\") pod \"csi-hostpathplugin-pzlvv\" (UID: \"4464550b-e8e5-4f07-8e54-40eb792eb201\") " pod="hostpath-provisioner/csi-hostpathplugin-pzlvv" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.256680 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f875209-3895-465c-9992-37ef91c4dda9-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-55st2\" (UID: \"5f875209-3895-465c-9992-37ef91c4dda9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-55st2" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.256718 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/16e64d17-7476-4caf-af16-73d93e2c1085-tmpfs\") pod \"packageserver-7d4fc7d867-7fnwf\" (UID: \"16e64d17-7476-4caf-af16-73d93e2c1085\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.256740 
5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a884209-147f-4786-9a92-5eb1a685dc1b-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-h4rkf\" (UID: \"3a884209-147f-4786-9a92-5eb1a685dc1b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-h4rkf" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.256754 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9cf38d4d-d415-4e81-85c3-bb2c65f7423b-images\") pod \"machine-config-operator-67c9d58cbb-4j4dd\" (UID: \"9cf38d4d-d415-4e81-85c3-bb2c65f7423b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4j4dd" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.256768 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cq9mh\" (UniqueName: \"kubernetes.io/projected/e17b81d4-9c4f-44ee-acaa-e008890ca2fd-kube-api-access-cq9mh\") pod \"catalog-operator-75ff9f647d-pcrzq\" (UID: \"e17b81d4-9c4f-44ee-acaa-e008890ca2fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257064 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4ba4d5a4-755b-4d6e-b250-cd705b244775-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-bjwbh\" (UID: \"4ba4d5a4-755b-4d6e-b250-cd705b244775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257109 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4464550b-e8e5-4f07-8e54-40eb792eb201-registration-dir\") pod 
\"csi-hostpathplugin-pzlvv\" (UID: \"4464550b-e8e5-4f07-8e54-40eb792eb201\") " pod="hostpath-provisioner/csi-hostpathplugin-pzlvv" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257155 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sklkw\" (UniqueName: \"kubernetes.io/projected/3bcb987d-ffcc-436f-b7fb-75a4d2e4952d-kube-api-access-sklkw\") pod \"console-operator-67c89758df-67svm\" (UID: \"3bcb987d-ffcc-436f-b7fb-75a4d2e4952d\") " pod="openshift-console-operator/console-operator-67c89758df-67svm" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257161 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ba4d5a4-755b-4d6e-b250-cd705b244775-tmp\") pod \"marketplace-operator-547dbd544d-bjwbh\" (UID: \"4ba4d5a4-755b-4d6e-b250-cd705b244775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257180 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/49efa7f2-d990-4552-80cf-5d4a72a32ec7-default-certificate\") pod \"router-default-68cf44c8b8-n929c\" (UID: \"49efa7f2-d990-4552-80cf-5d4a72a32ec7\") " pod="openshift-ingress/router-default-68cf44c8b8-n929c" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257202 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49efa7f2-d990-4552-80cf-5d4a72a32ec7-service-ca-bundle\") pod \"router-default-68cf44c8b8-n929c\" (UID: \"49efa7f2-d990-4552-80cf-5d4a72a32ec7\") " pod="openshift-ingress/router-default-68cf44c8b8-n929c" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257227 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-db2dq\" (UniqueName: 
\"kubernetes.io/projected/d7cb0851-2110-44f5-b3fc-77478d1fd49f-kube-api-access-db2dq\") pod \"multus-admission-controller-69db94689b-g57zz\" (UID: \"d7cb0851-2110-44f5-b3fc-77478d1fd49f\") " pod="openshift-multus/multus-admission-controller-69db94689b-g57zz" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257247 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4464550b-e8e5-4f07-8e54-40eb792eb201-mountpoint-dir\") pod \"csi-hostpathplugin-pzlvv\" (UID: \"4464550b-e8e5-4f07-8e54-40eb792eb201\") " pod="hostpath-provisioner/csi-hostpathplugin-pzlvv" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257249 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f875209-3895-465c-9992-37ef91c4dda9-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-55st2\" (UID: \"5f875209-3895-465c-9992-37ef91c4dda9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-55st2" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257297 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bvhfg\" (UniqueName: \"kubernetes.io/projected/10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730-kube-api-access-bvhfg\") pod \"machine-api-operator-755bb95488-jptld\" (UID: \"10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jptld" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257323 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0be01a81-9d33-487a-abf9-0284a7b3f24b-signing-key\") pod \"service-ca-74545575db-l6gvr\" (UID: \"0be01a81-9d33-487a-abf9-0284a7b3f24b\") " pod="openshift-service-ca/service-ca-74545575db-l6gvr" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257359 5110 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-c5l6l\" (UID: \"6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-c5l6l" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257385 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730-images\") pod \"machine-api-operator-755bb95488-jptld\" (UID: \"10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jptld" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257421 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8c8a6437-b0cb-4825-999f-6e523fd394e9-ready\") pod \"cni-sysctl-allowlist-ds-hvgtl\" (UID: \"8c8a6437-b0cb-4825-999f-6e523fd394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257445 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/49efa7f2-d990-4552-80cf-5d4a72a32ec7-stats-auth\") pod \"router-default-68cf44c8b8-n929c\" (UID: \"49efa7f2-d990-4552-80cf-5d4a72a32ec7\") " pod="openshift-ingress/router-default-68cf44c8b8-n929c" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257475 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b4jkm\" (UniqueName: \"kubernetes.io/projected/ddb3d5eb-9401-4032-9bcd-c2f798fbaf51-kube-api-access-b4jkm\") pod \"cluster-image-registry-operator-86c45576b9-t49fp\" (UID: \"ddb3d5eb-9401-4032-9bcd-c2f798fbaf51\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257499 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257519 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ddb3d5eb-9401-4032-9bcd-c2f798fbaf51-tmp\") pod \"cluster-image-registry-operator-86c45576b9-t49fp\" (UID: \"ddb3d5eb-9401-4032-9bcd-c2f798fbaf51\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257546 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ba4d5a4-755b-4d6e-b250-cd705b244775-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-bjwbh\" (UID: \"4ba4d5a4-755b-4d6e-b250-cd705b244775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257568 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mhpqv\" (UniqueName: \"kubernetes.io/projected/4ba4d5a4-755b-4d6e-b250-cd705b244775-kube-api-access-mhpqv\") pod \"marketplace-operator-547dbd544d-bjwbh\" (UID: \"4ba4d5a4-755b-4d6e-b250-cd705b244775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257587 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/ddb3d5eb-9401-4032-9bcd-c2f798fbaf51-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-t49fp\" (UID: \"ddb3d5eb-9401-4032-9bcd-c2f798fbaf51\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257606 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8c8a6437-b0cb-4825-999f-6e523fd394e9-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hvgtl\" (UID: \"8c8a6437-b0cb-4825-999f-6e523fd394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257641 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t6dh9\" (UniqueName: \"kubernetes.io/projected/5f875209-3895-465c-9992-37ef91c4dda9-kube-api-access-t6dh9\") pod \"machine-config-controller-f9cdd68f7-55st2\" (UID: \"5f875209-3895-465c-9992-37ef91c4dda9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-55st2" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257660 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16e64d17-7476-4caf-af16-73d93e2c1085-webhook-cert\") pod \"packageserver-7d4fc7d867-7fnwf\" (UID: \"16e64d17-7476-4caf-af16-73d93e2c1085\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257675 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4464550b-e8e5-4f07-8e54-40eb792eb201-csi-data-dir\") pod \"csi-hostpathplugin-pzlvv\" (UID: \"4464550b-e8e5-4f07-8e54-40eb792eb201\") " pod="hostpath-provisioner/csi-hostpathplugin-pzlvv" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 
00:11:24.257697 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b6m5r\" (UniqueName: \"kubernetes.io/projected/497eadd3-187e-4dfb-82c3-fad0d59eb723-kube-api-access-b6m5r\") pod \"image-pruner-29489760-jjbnv\" (UID: \"497eadd3-187e-4dfb-82c3-fad0d59eb723\") " pod="openshift-image-registry/image-pruner-29489760-jjbnv" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257722 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4464550b-e8e5-4f07-8e54-40eb792eb201-plugins-dir\") pod \"csi-hostpathplugin-pzlvv\" (UID: \"4464550b-e8e5-4f07-8e54-40eb792eb201\") " pod="hostpath-provisioner/csi-hostpathplugin-pzlvv" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257739 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ddb3d5eb-9401-4032-9bcd-c2f798fbaf51-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-t49fp\" (UID: \"ddb3d5eb-9401-4032-9bcd-c2f798fbaf51\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257763 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bcb987d-ffcc-436f-b7fb-75a4d2e4952d-trusted-ca\") pod \"console-operator-67c89758df-67svm\" (UID: \"3bcb987d-ffcc-436f-b7fb-75a4d2e4952d\") " pod="openshift-console-operator/console-operator-67c89758df-67svm" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257790 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1eb870e-2fb9-4e38-94b6-80fb61ed6ad3-serving-cert\") pod \"service-ca-operator-5b9c976747-2dt85\" (UID: \"d1eb870e-2fb9-4e38-94b6-80fb61ed6ad3\") " 
pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2dt85"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257836 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gfnsx\" (UniqueName: \"kubernetes.io/projected/d1eb870e-2fb9-4e38-94b6-80fb61ed6ad3-kube-api-access-gfnsx\") pod \"service-ca-operator-5b9c976747-2dt85\" (UID: \"d1eb870e-2fb9-4e38-94b6-80fb61ed6ad3\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2dt85"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257853 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1cea4c4b-a35b-4b39-a268-15234da1bc4a-auth-proxy-config\") pod \"machine-approver-54c688565-tgtdw\" (UID: \"1cea4c4b-a35b-4b39-a268-15234da1bc4a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-tgtdw"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257874 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9b424\" (UniqueName: \"kubernetes.io/projected/9cf38d4d-d415-4e81-85c3-bb2c65f7423b-kube-api-access-9b424\") pod \"machine-config-operator-67c9d58cbb-4j4dd\" (UID: \"9cf38d4d-d415-4e81-85c3-bb2c65f7423b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4j4dd"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257893 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ce4b4a91-4053-4fb9-be7e-ac51e71a829e-profile-collector-cert\") pod \"olm-operator-5cdf44d969-99xsn\" (UID: \"ce4b4a91-4053-4fb9-be7e-ac51e71a829e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257912 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p894f\" (UniqueName: \"kubernetes.io/projected/ce4b4a91-4053-4fb9-be7e-ac51e71a829e-kube-api-access-p894f\") pod \"olm-operator-5cdf44d969-99xsn\" (UID: \"ce4b4a91-4053-4fb9-be7e-ac51e71a829e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257931 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d7cb0851-2110-44f5-b3fc-77478d1fd49f-webhook-certs\") pod \"multus-admission-controller-69db94689b-g57zz\" (UID: \"d7cb0851-2110-44f5-b3fc-77478d1fd49f\") " pod="openshift-multus/multus-admission-controller-69db94689b-g57zz"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257960 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ddb3d5eb-9401-4032-9bcd-c2f798fbaf51-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-t49fp\" (UID: \"ddb3d5eb-9401-4032-9bcd-c2f798fbaf51\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257981 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cea4c4b-a35b-4b39-a268-15234da1bc4a-config\") pod \"machine-approver-54c688565-tgtdw\" (UID: \"1cea4c4b-a35b-4b39-a268-15234da1bc4a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-tgtdw"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.258002 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-24hmk\" (UniqueName: \"kubernetes.io/projected/4464550b-e8e5-4f07-8e54-40eb792eb201-kube-api-access-24hmk\") pod \"csi-hostpathplugin-pzlvv\" (UID: \"4464550b-e8e5-4f07-8e54-40eb792eb201\") " pod="hostpath-provisioner/csi-hostpathplugin-pzlvv"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.258032 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9cf38d4d-d415-4e81-85c3-bb2c65f7423b-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-4j4dd\" (UID: \"9cf38d4d-d415-4e81-85c3-bb2c65f7423b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4j4dd"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.258060 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/765dcc91-6c2c-4e8e-a85d-b083d0a8e261-node-bootstrap-token\") pod \"machine-config-server-gpd7g\" (UID: \"765dcc91-6c2c-4e8e-a85d-b083d0a8e261\") " pod="openshift-machine-config-operator/machine-config-server-gpd7g"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.258081 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730-config\") pod \"machine-api-operator-755bb95488-jptld\" (UID: \"10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jptld"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.258106 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6kjn2\" (UniqueName: \"kubernetes.io/projected/1cea4c4b-a35b-4b39-a268-15234da1bc4a-kube-api-access-6kjn2\") pod \"machine-approver-54c688565-tgtdw\" (UID: \"1cea4c4b-a35b-4b39-a268-15234da1bc4a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-tgtdw"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.258125 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/77aa983a-f3c1-4799-a84b-c3c7a381a1bc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-crnjv\" (UID: \"77aa983a-f3c1-4799-a84b-c3c7a381a1bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-crnjv"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.258158 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8xsx\" (UniqueName: \"kubernetes.io/projected/24efc76d-eb4f-4036-804e-a71705eb1a78-kube-api-access-w8xsx\") pod \"ingress-canary-bqdwj\" (UID: \"24efc76d-eb4f-4036-804e-a71705eb1a78\") " pod="openshift-ingress-canary/ingress-canary-bqdwj"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.258950 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bcb987d-ffcc-436f-b7fb-75a4d2e4952d-config\") pod \"console-operator-67c89758df-67svm\" (UID: \"3bcb987d-ffcc-436f-b7fb-75a4d2e4952d\") " pod="openshift-console-operator/console-operator-67c89758df-67svm"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.258975 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f875209-3895-465c-9992-37ef91c4dda9-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-55st2\" (UID: \"5f875209-3895-465c-9992-37ef91c4dda9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-55st2"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.259015 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/ddb3d5eb-9401-4032-9bcd-c2f798fbaf51-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-t49fp\" (UID: \"ddb3d5eb-9401-4032-9bcd-c2f798fbaf51\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.259182 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4464550b-e8e5-4f07-8e54-40eb792eb201-socket-dir\") pod \"csi-hostpathplugin-pzlvv\" (UID: \"4464550b-e8e5-4f07-8e54-40eb792eb201\") " pod="hostpath-provisioner/csi-hostpathplugin-pzlvv"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.259299 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/16e64d17-7476-4caf-af16-73d93e2c1085-tmpfs\") pod \"packageserver-7d4fc7d867-7fnwf\" (UID: \"16e64d17-7476-4caf-af16-73d93e2c1085\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.259348 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4464550b-e8e5-4f07-8e54-40eb792eb201-plugins-dir\") pod \"csi-hostpathplugin-pzlvv\" (UID: \"4464550b-e8e5-4f07-8e54-40eb792eb201\") " pod="hostpath-provisioner/csi-hostpathplugin-pzlvv"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.263321 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bcb987d-ffcc-436f-b7fb-75a4d2e4952d-serving-cert\") pod \"console-operator-67c89758df-67svm\" (UID: \"3bcb987d-ffcc-436f-b7fb-75a4d2e4952d\") " pod="openshift-console-operator/console-operator-67c89758df-67svm"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.263858 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1cea4c4b-a35b-4b39-a268-15234da1bc4a-machine-approver-tls\") pod \"machine-approver-54c688565-tgtdw\" (UID: \"1cea4c4b-a35b-4b39-a268-15234da1bc4a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-tgtdw"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.263898 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bcb987d-ffcc-436f-b7fb-75a4d2e4952d-trusted-ca\") pod \"console-operator-67c89758df-67svm\" (UID: \"3bcb987d-ffcc-436f-b7fb-75a4d2e4952d\") " pod="openshift-console-operator/console-operator-67c89758df-67svm"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.267072 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9cf38d4d-d415-4e81-85c3-bb2c65f7423b-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-4j4dd\" (UID: \"9cf38d4d-d415-4e81-85c3-bb2c65f7423b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4j4dd"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.275340 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9cf38d4d-d415-4e81-85c3-bb2c65f7423b-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-4j4dd\" (UID: \"9cf38d4d-d415-4e81-85c3-bb2c65f7423b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4j4dd"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.369191 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e17b81d4-9c4f-44ee-acaa-e008890ca2fd-tmpfs\") pod \"catalog-operator-75ff9f647d-pcrzq\" (UID: \"e17b81d4-9c4f-44ee-acaa-e008890ca2fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.371184 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ddb3d5eb-9401-4032-9bcd-c2f798fbaf51-tmp\") pod \"cluster-image-registry-operator-86c45576b9-t49fp\" (UID: \"ddb3d5eb-9401-4032-9bcd-c2f798fbaf51\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp"
Jan 26 00:11:24 crc kubenswrapper[5110]: E0126 00:11:24.371642 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:24.871621531 +0000 UTC m=+142.100520140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.257676 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4464550b-e8e5-4f07-8e54-40eb792eb201-registration-dir\") pod \"csi-hostpathplugin-pzlvv\" (UID: \"4464550b-e8e5-4f07-8e54-40eb792eb201\") " pod="hostpath-provisioner/csi-hostpathplugin-pzlvv"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.372285 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4464550b-e8e5-4f07-8e54-40eb792eb201-csi-data-dir\") pod \"csi-hostpathplugin-pzlvv\" (UID: \"4464550b-e8e5-4f07-8e54-40eb792eb201\") " pod="hostpath-provisioner/csi-hostpathplugin-pzlvv"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.372858 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1cea4c4b-a35b-4b39-a268-15234da1bc4a-auth-proxy-config\") pod \"machine-approver-54c688565-tgtdw\" (UID: \"1cea4c4b-a35b-4b39-a268-15234da1bc4a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-tgtdw"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.373846 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.373912 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ddb3d5eb-9401-4032-9bcd-c2f798fbaf51-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-t49fp\" (UID: \"ddb3d5eb-9401-4032-9bcd-c2f798fbaf51\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.374061 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w8xsx\" (UniqueName: \"kubernetes.io/projected/24efc76d-eb4f-4036-804e-a71705eb1a78-kube-api-access-w8xsx\") pod \"ingress-canary-bqdwj\" (UID: \"24efc76d-eb4f-4036-804e-a71705eb1a78\") " pod="openshift-ingress-canary/ingress-canary-bqdwj"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.374143 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24efc76d-eb4f-4036-804e-a71705eb1a78-cert\") pod \"ingress-canary-bqdwj\" (UID: \"24efc76d-eb4f-4036-804e-a71705eb1a78\") " pod="openshift-ingress-canary/ingress-canary-bqdwj"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.374176 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z6kl9\" (UniqueName: \"kubernetes.io/projected/8c8a6437-b0cb-4825-999f-6e523fd394e9-kube-api-access-z6kl9\") pod \"cni-sysctl-allowlist-ds-hvgtl\" (UID: \"8c8a6437-b0cb-4825-999f-6e523fd394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.374213 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8c8a6437-b0cb-4825-999f-6e523fd394e9-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hvgtl\" (UID: \"8c8a6437-b0cb-4825-999f-6e523fd394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.374328 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8c8a6437-b0cb-4825-999f-6e523fd394e9-ready\") pod \"cni-sysctl-allowlist-ds-hvgtl\" (UID: \"8c8a6437-b0cb-4825-999f-6e523fd394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.374375 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8c8a6437-b0cb-4825-999f-6e523fd394e9-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hvgtl\" (UID: \"8c8a6437-b0cb-4825-999f-6e523fd394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.374496 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8c8a6437-b0cb-4825-999f-6e523fd394e9-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hvgtl\" (UID: \"8c8a6437-b0cb-4825-999f-6e523fd394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl"
Jan 26 00:11:24 crc kubenswrapper[5110]: E0126 00:11:24.374553 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:24.874544695 +0000 UTC m=+142.103443304 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.375097 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8c8a6437-b0cb-4825-999f-6e523fd394e9-ready\") pod \"cni-sysctl-allowlist-ds-hvgtl\" (UID: \"8c8a6437-b0cb-4825-999f-6e523fd394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.375167 5110 request.go:752] "Waited before sending request" delay="1.035967113s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&limit=500&resourceVersion=0"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.378180 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/925904bd-cb57-4356-a6df-e72bd716398b-serving-cert\") pod \"authentication-operator-7f5c659b84-qknx7\" (UID: \"925904bd-cb57-4356-a6df-e72bd716398b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.378446 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/497eadd3-187e-4dfb-82c3-fad0d59eb723-serviceca\") pod \"image-pruner-29489760-jjbnv\" (UID: \"497eadd3-187e-4dfb-82c3-fad0d59eb723\") " pod="openshift-image-registry/image-pruner-29489760-jjbnv"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.380182 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ddb3d5eb-9401-4032-9bcd-c2f798fbaf51-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-t49fp\" (UID: \"ddb3d5eb-9401-4032-9bcd-c2f798fbaf51\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.415607 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.417254 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.417907 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.418061 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.418401 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.418539 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.419864 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.422519 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/925904bd-cb57-4356-a6df-e72bd716398b-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-qknx7\" (UID: \"925904bd-cb57-4356-a6df-e72bd716398b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.423577 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/925904bd-cb57-4356-a6df-e72bd716398b-config\") pod \"authentication-operator-7f5c659b84-qknx7\" (UID: \"925904bd-cb57-4356-a6df-e72bd716398b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.425404 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/defa6a4f-7d53-4d4c-b0c7-cfb7489db02c-secret-volume\") pod \"collect-profiles-29489760-vd2k2\" (UID: \"defa6a4f-7d53-4d4c-b0c7-cfb7489db02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-vd2k2"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.438632 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e17b81d4-9c4f-44ee-acaa-e008890ca2fd-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-pcrzq\" (UID: \"e17b81d4-9c4f-44ee-acaa-e008890ca2fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.439728 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.441349 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/925904bd-cb57-4356-a6df-e72bd716398b-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-qknx7\" (UID: \"925904bd-cb57-4356-a6df-e72bd716398b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.442017 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e17b81d4-9c4f-44ee-acaa-e008890ca2fd-srv-cert\") pod \"catalog-operator-75ff9f647d-pcrzq\" (UID: \"e17b81d4-9c4f-44ee-acaa-e008890ca2fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.462738 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-grskb\" (UniqueName: \"kubernetes.io/projected/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-kube-api-access-grskb\") pod \"controller-manager-65b6cccf98-8ssgr\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.466977 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5n6n\" (UniqueName: \"kubernetes.io/projected/aaaae9da-7fe6-47c9-8a79-20343d5f6c43-kube-api-access-r5n6n\") pod \"dns-operator-799b87ffcd-cg4nl\" (UID: \"aaaae9da-7fe6-47c9-8a79-20343d5f6c43\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cg4nl"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.468917 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.476223 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:24 crc kubenswrapper[5110]: E0126 00:11:24.476695 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:24.976674288 +0000 UTC m=+142.205572897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.483481 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d7cb0851-2110-44f5-b3fc-77478d1fd49f-webhook-certs\") pod \"multus-admission-controller-69db94689b-g57zz\" (UID: \"d7cb0851-2110-44f5-b3fc-77478d1fd49f\") " pod="openshift-multus/multus-admission-controller-69db94689b-g57zz"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.486184 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.494285 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730-config\") pod \"machine-api-operator-755bb95488-jptld\" (UID: \"10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jptld"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.501339 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cea4c4b-a35b-4b39-a268-15234da1bc4a-config\") pod \"machine-approver-54c688565-tgtdw\" (UID: \"1cea4c4b-a35b-4b39-a268-15234da1bc4a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-tgtdw"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.504769 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/77aa983a-f3c1-4799-a84b-c3c7a381a1bc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-crnjv\" (UID: \"77aa983a-f3c1-4799-a84b-c3c7a381a1bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-crnjv"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.507227 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.509519 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ce4b4a91-4053-4fb9-be7e-ac51e71a829e-profile-collector-cert\") pod \"olm-operator-5cdf44d969-99xsn\" (UID: \"ce4b4a91-4053-4fb9-be7e-ac51e71a829e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.520413 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-c5l6l\" (UID: \"6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-c5l6l"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.527654 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.539419 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-jptld\" (UID: \"10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jptld"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.562294 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.567896 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730-images\") pod \"machine-api-operator-755bb95488-jptld\" (UID: \"10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jptld"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.568451 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.577771 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:24 crc kubenswrapper[5110]: E0126 00:11:24.578279 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:25.078264536 +0000 UTC m=+142.307163145 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.587683 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.590522 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.605761 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.625516 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.633290 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/16e64d17-7476-4caf-af16-73d93e2c1085-apiservice-cert\") pod \"packageserver-7d4fc7d867-7fnwf\" (UID: \"16e64d17-7476-4caf-af16-73d93e2c1085\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.640753 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16e64d17-7476-4caf-af16-73d93e2c1085-webhook-cert\") pod \"packageserver-7d4fc7d867-7fnwf\" (UID: \"16e64d17-7476-4caf-af16-73d93e2c1085\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.646692 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.664767 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-cg4nl"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.680183 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.681715 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Jan 26 00:11:24 crc kubenswrapper[5110]: E0126 00:11:24.682269 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:25.182255692 +0000 UTC m=+142.411154301 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.686697 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.694496 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/49efa7f2-d990-4552-80cf-5d4a72a32ec7-default-certificate\") pod \"router-default-68cf44c8b8-n929c\" (UID: \"49efa7f2-d990-4552-80cf-5d4a72a32ec7\") " pod="openshift-ingress/router-default-68cf44c8b8-n929c"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.710829 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/49efa7f2-d990-4552-80cf-5d4a72a32ec7-stats-auth\") pod \"router-default-68cf44c8b8-n929c\" (UID: \"49efa7f2-d990-4552-80cf-5d4a72a32ec7\") " pod="openshift-ingress/router-default-68cf44c8b8-n929c"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.713868 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.730492 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-qdwb2"]
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.730686 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.752949 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.756757 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ngcjz"]
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.767204 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.770422 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49efa7f2-d990-4552-80cf-5d4a72a32ec7-service-ca-bundle\") pod \"router-default-68cf44c8b8-n929c\" (UID: \"49efa7f2-d990-4552-80cf-5d4a72a32ec7\") " pod="openshift-ingress/router-default-68cf44c8b8-n929c"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.774774 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49efa7f2-d990-4552-80cf-5d4a72a32ec7-metrics-certs\") pod \"router-default-68cf44c8b8-n929c\" (UID: \"49efa7f2-d990-4552-80cf-5d4a72a32ec7\") " pod="openshift-ingress/router-default-68cf44c8b8-n929c"
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.783623 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2gvm5"]
Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.783896 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\")
" Jan 26 00:11:24 crc kubenswrapper[5110]: E0126 00:11:24.784970 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:25.284945141 +0000 UTC m=+142.513843750 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.786212 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:24 crc kubenswrapper[5110]: E0126 00:11:24.786713 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:25.286703072 +0000 UTC m=+142.515601681 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.805313 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jp8q"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.808666 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.817622 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a884209-147f-4786-9a92-5eb1a685dc1b-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-h4rkf\" (UID: \"3a884209-147f-4786-9a92-5eb1a685dc1b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-h4rkf" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.818254 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-tjq6l"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.827292 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.833941 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f875209-3895-465c-9992-37ef91c4dda9-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-55st2\" 
(UID: \"5f875209-3895-465c-9992-37ef91c4dda9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-55st2" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.848333 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.856526 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-7plfs"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.862089 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-46ljf"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.882780 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.887760 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.888388 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:24 crc kubenswrapper[5110]: E0126 00:11:24.889229 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:25.389202625 +0000 UTC m=+142.618101234 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:24 crc kubenswrapper[5110]: W0126 00:11:24.892383 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6314e04a_09b6_48f6_950c_271199f2f803.slice/crio-d7437afdb169de1da651c2f801a697784ecbde6853e5cb5372bf3d1a9c152a4f WatchSource:0}: Error finding container d7437afdb169de1da651c2f801a697784ecbde6853e5cb5372bf3d1a9c152a4f: Status 404 returned error can't find the container with id d7437afdb169de1da651c2f801a697784ecbde6853e5cb5372bf3d1a9c152a4f Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.908014 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.908590 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.914462 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0be01a81-9d33-487a-abf9-0284a7b3f24b-signing-key\") pod \"service-ca-74545575db-l6gvr\" (UID: \"0be01a81-9d33-487a-abf9-0284a7b3f24b\") " pod="openshift-service-ca/service-ca-74545575db-l6gvr" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.927373 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.934843 5110 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0be01a81-9d33-487a-abf9-0284a7b3f24b-signing-cabundle\") pod \"service-ca-74545575db-l6gvr\" (UID: \"0be01a81-9d33-487a-abf9-0284a7b3f24b\") " pod="openshift-service-ca/service-ca-74545575db-l6gvr" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.937738 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bfssk"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.945835 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.965524 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.966780 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-tg86c"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.986021 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.992868 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:24 crc kubenswrapper[5110]: E0126 00:11:24.993764 5110 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:25.493736027 +0000 UTC m=+142.722634636 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.993788 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"] Jan 26 00:11:24 crc kubenswrapper[5110]: I0126 00:11:24.995102 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/defa6a4f-7d53-4d4c-b0c7-cfb7489db02c-config-volume\") pod \"collect-profiles-29489760-vd2k2\" (UID: \"defa6a4f-7d53-4d4c-b0c7-cfb7489db02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-vd2k2" Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.006299 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 26 00:11:25 crc kubenswrapper[5110]: W0126 00:11:25.008154 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c6c5e03_641e_4e9a_9d24_8b4565f9bdbb.slice/crio-d5c7431865f8c6d06a8fa85405b20def9940d3d3b870449882766b19a541cb0c WatchSource:0}: Error finding container d5c7431865f8c6d06a8fa85405b20def9940d3d3b870449882766b19a541cb0c: Status 404 returned error can't find the container with id 
d5c7431865f8c6d06a8fa85405b20def9940d3d3b870449882766b19a541cb0c Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.013037 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4ba4d5a4-755b-4d6e-b250-cd705b244775-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-bjwbh\" (UID: \"4ba4d5a4-755b-4d6e-b250-cd705b244775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.025583 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.037387 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-cg4nl"] Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.045685 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 26 00:11:25 crc kubenswrapper[5110]: W0126 00:11:25.048528 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaaaae9da_7fe6_47c9_8a79_20343d5f6c43.slice/crio-3c114895dd5592e4295e0bd8d5eab116b4eb74f550e7e7f754f53934a3c73dd2 WatchSource:0}: Error finding container 3c114895dd5592e4295e0bd8d5eab116b4eb74f550e7e7f754f53934a3c73dd2: Status 404 returned error can't find the container with id 3c114895dd5592e4295e0bd8d5eab116b4eb74f550e7e7f754f53934a3c73dd2 Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.066089 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.095022 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.095054 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.095343 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:25.595307164 +0000 UTC m=+142.824205773 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.099399 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.100007 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: 
nodeName:}" failed. No retries permitted until 2026-01-26 00:11:25.599984369 +0000 UTC m=+142.828882988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.103852 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ba4d5a4-755b-4d6e-b250-cd705b244775-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-bjwbh\" (UID: \"4ba4d5a4-755b-4d6e-b250-cd705b244775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.106395 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.110460 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1eb870e-2fb9-4e38-94b6-80fb61ed6ad3-config\") pod \"service-ca-operator-5b9c976747-2dt85\" (UID: \"d1eb870e-2fb9-4e38-94b6-80fb61ed6ad3\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2dt85" Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.126423 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.133750 5110 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync 
configmap cache: timed out waiting for the condition Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.133849 5110 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.133856 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3451ac5b-8ad7-4419-b262-ec54012b9dc6-config-volume podName:3451ac5b-8ad7-4419-b262-ec54012b9dc6 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:25.633828628 +0000 UTC m=+142.862727237 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3451ac5b-8ad7-4419-b262-ec54012b9dc6-config-volume") pod "dns-default-vl2cl" (UID: "3451ac5b-8ad7-4419-b262-ec54012b9dc6") : failed to sync configmap cache: timed out waiting for the condition Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.134442 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3451ac5b-8ad7-4419-b262-ec54012b9dc6-metrics-tls podName:3451ac5b-8ad7-4419-b262-ec54012b9dc6 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:25.634411664 +0000 UTC m=+142.863310273 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3451ac5b-8ad7-4419-b262-ec54012b9dc6-metrics-tls") pod "dns-default-vl2cl" (UID: "3451ac5b-8ad7-4419-b262-ec54012b9dc6") : failed to sync secret cache: timed out waiting for the condition Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.145993 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.160303 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1eb870e-2fb9-4e38-94b6-80fb61ed6ad3-serving-cert\") pod \"service-ca-operator-5b9c976747-2dt85\" (UID: \"d1eb870e-2fb9-4e38-94b6-80fb61ed6ad3\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2dt85" Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.166113 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.188114 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.201254 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.201521 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-26 00:11:25.701464123 +0000 UTC m=+142.930362722 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.201977 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.202526 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:25.702500153 +0000 UTC m=+142.931398802 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.206112 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.217919 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ce4b4a91-4053-4fb9-be7e-ac51e71a829e-srv-cert\") pod \"olm-operator-5cdf44d969-99xsn\" (UID: \"ce4b4a91-4053-4fb9-be7e-ac51e71a829e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn" Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.229112 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.238695 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/765dcc91-6c2c-4e8e-a85d-b083d0a8e261-certs\") pod \"machine-config-server-gpd7g\" (UID: \"765dcc91-6c2c-4e8e-a85d-b083d0a8e261\") " pod="openshift-machine-config-operator/machine-config-server-gpd7g" Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.246927 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.266976 5110 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.279016 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/765dcc91-6c2c-4e8e-a85d-b083d0a8e261-node-bootstrap-token\") pod \"machine-config-server-gpd7g\" (UID: \"765dcc91-6c2c-4e8e-a85d-b083d0a8e261\") " pod="openshift-machine-config-operator/machine-config-server-gpd7g" Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.286745 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.304510 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.304664 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:25.804636606 +0000 UTC m=+143.033535215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.304784 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.305374 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:25.805350077 +0000 UTC m=+143.034248686 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.306239 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.326162 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.346240 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.367422 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.375928 5110 configmap.go:193] Couldn't get configMap openshift-multus/cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition
Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.375870 5110 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.376023 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8c8a6437-b0cb-4825-999f-6e523fd394e9-cni-sysctl-allowlist podName:8c8a6437-b0cb-4825-999f-6e523fd394e9 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:25.875992229 +0000 UTC m=+143.104890838 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/8c8a6437-b0cb-4825-999f-6e523fd394e9-cni-sysctl-allowlist") pod "cni-sysctl-allowlist-ds-hvgtl" (UID: "8c8a6437-b0cb-4825-999f-6e523fd394e9") : failed to sync configmap cache: timed out waiting for the condition
Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.376058 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24efc76d-eb4f-4036-804e-a71705eb1a78-cert podName:24efc76d-eb4f-4036-804e-a71705eb1a78 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:25.876045221 +0000 UTC m=+143.104943830 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/24efc76d-eb4f-4036-804e-a71705eb1a78-cert") pod "ingress-canary-bqdwj" (UID: "24efc76d-eb4f-4036-804e-a71705eb1a78") : failed to sync secret cache: timed out waiting for the condition
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.383522 5110 request.go:752] "Waited before sending request" delay="1.535709101s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&limit=500&resourceVersion=0"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.385390 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.406715 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.406907 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:25.906874762 +0000 UTC m=+143.135773361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.407786 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.408257 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:25.908240542 +0000 UTC m=+143.137139151 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.520240 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.520892 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:26.020874858 +0000 UTC m=+143.249773467 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.521935 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkqvj\" (UniqueName: \"kubernetes.io/projected/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-kube-api-access-tkqvj\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.528432 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.530048 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.535330 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-tjq6l" event={"ID":"766da705-879b-4d57-a34d-43eca4c9da19","Type":"ContainerStarted","Data":"dfb21bca17626fd39735ee0b4f44ef92c0d44b79aea40ba7437ccd9a1332989e"}
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.535390 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-tjq6l" event={"ID":"766da705-879b-4d57-a34d-43eca4c9da19","Type":"ContainerStarted","Data":"7c53cadfa60a039df98eab5ea29000c79a09fa1e1d936b811ffabc20bc0e8253"}
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.546606 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5rck\" (UniqueName: \"kubernetes.io/projected/1b2941d0-33a9-475b-9ed9-5534ffda45c6-kube-api-access-k5rck\") pod \"kube-storage-version-migrator-operator-565b79b866-gk4w9\" (UID: \"1b2941d0-33a9-475b-9ed9-5534ffda45c6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gk4w9"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.547946 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.548607 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-clsvv\" (UniqueName: \"kubernetes.io/projected/e2948eb5-9117-467a-89c9-742dde24b958-kube-api-access-clsvv\") pod \"route-controller-manager-776cdc94d6-rwwx4\" (UID: \"e2948eb5-9117-467a-89c9-742dde24b958\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.548717 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-bound-sa-token\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.548886 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhp28\" (UniqueName: \"kubernetes.io/projected/2438617c-3b33-4b33-971c-afaab481cfe6-kube-api-access-mhp28\") pod \"downloads-747b44746d-77449\" (UID: \"2438617c-3b33-4b33-971c-afaab481cfe6\") " pod="openshift-console/downloads-747b44746d-77449"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.560959 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-46ljf" event={"ID":"6314e04a-09b6-48f6-950c-271199f2f803","Type":"ContainerStarted","Data":"648e99eaf95a2e8e0cf3fae6279c099ac0b6aae76462b007b9b7e14b603c1155"}
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.561563 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-46ljf" event={"ID":"6314e04a-09b6-48f6-950c-271199f2f803","Type":"ContainerStarted","Data":"d7437afdb169de1da651c2f801a697784ecbde6853e5cb5372bf3d1a9c152a4f"}
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.569046 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2gvm5" event={"ID":"acf81ab8-6555-4503-b083-452fc4d249c1","Type":"ContainerStarted","Data":"eed45724adb37c4a5354dc3d750c7b2c20dd23add2eb86c0648bbb6700642a25"}
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.569261 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2gvm5" event={"ID":"acf81ab8-6555-4503-b083-452fc4d249c1","Type":"ContainerStarted","Data":"165f7a84f07329cdcda6642be6084349bc9e558e0a075c4733aaac26ab0419e2"}
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.570362 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.571955 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.575699 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" event={"ID":"0cc5795e-df15-48c8-949a-7254633f27e3","Type":"ContainerStarted","Data":"584cc62509629b866d5c175006952d534da0f06b0510077bde2a810a46777ff0"}
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.577852 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jp8q" event={"ID":"bad86cda-e9f6-44e8-84dd-966fc1a2434b","Type":"ContainerStarted","Data":"7f3a327d0ddd868ca61c996a15dd29980a6539a79735f0744eaa1802f1f064b7"}
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.579648 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gk4w9"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.581298 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-cg4nl" event={"ID":"aaaae9da-7fe6-47c9-8a79-20343d5f6c43","Type":"ContainerStarted","Data":"3c114895dd5592e4295e0bd8d5eab116b4eb74f550e7e7f754f53934a3c73dd2"}
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.586577 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\""
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.587249 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr" event={"ID":"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb","Type":"ContainerStarted","Data":"a1d2ffe9aa28d184a3128e2530810a13ca2ed60200a125c00979d325cc96f2c3"}
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.587291 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr" event={"ID":"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb","Type":"ContainerStarted","Data":"d5c7431865f8c6d06a8fa85405b20def9940d3d3b870449882766b19a541cb0c"}
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.588451 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.592815 5110 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-8ssgr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.592894 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr" podUID="9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.595164 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ngcjz" event={"ID":"125ce521-01f2-4198-b07e-e538e248c82f","Type":"ContainerStarted","Data":"41a5674a9f8ee206bdebec36362183c0941201d99ef6d7c007c75bbea32dd0f3"}
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.597707 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" event={"ID":"ecc19229-da38-4ff6-bc7a-d5864b4d1101","Type":"ContainerStarted","Data":"b6d17306a92bcf17a97a0da9dba52a45a644a8a3cdc7a2af85ce5fdfc0899021"}
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.598955 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bfssk" event={"ID":"be1e2506-51b5-4287-a99d-37df39a44ff7","Type":"ContainerStarted","Data":"e5810e1347e22a9d13bbf84ac25cf9355b22ea107883d0636e5c33daca5e0d2d"}
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.601198 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs" event={"ID":"6bf41d48-487f-4123-9d83-bef3d1efaa68","Type":"ContainerStarted","Data":"245bc54ec665e584cf3b6a9bb29abd60facb6fb2a6b2c8aca26afc8868bd485e"}
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.601970 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" event={"ID":"b4b9ee78-19a7-41bf-97e4-9ab13bad2730","Type":"ContainerStarted","Data":"25de0b13940cacbf707b15fc9d23a9accb6bae86aaa0bb0b92de6c4367344819"}
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.623422 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.624365 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:26.12434875 +0000 UTC m=+143.353247349 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.630506 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4nnv\" (UniqueName: \"kubernetes.io/projected/3451ac5b-8ad7-4419-b262-ec54012b9dc6-kube-api-access-m4nnv\") pod \"dns-default-vl2cl\" (UID: \"3451ac5b-8ad7-4419-b262-ec54012b9dc6\") " pod="openshift-dns/dns-default-vl2cl"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.667579 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwjd4\" (UniqueName: \"kubernetes.io/projected/defa6a4f-7d53-4d4c-b0c7-cfb7489db02c-kube-api-access-bwjd4\") pod \"collect-profiles-29489760-vd2k2\" (UID: \"defa6a4f-7d53-4d4c-b0c7-cfb7489db02c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-vd2k2"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.678361 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdjsf\" (UniqueName: \"kubernetes.io/projected/c451c2d9-b218-4494-93e4-200ef1a5eb75-kube-api-access-wdjsf\") pod \"openshift-config-operator-5777786469-979dq\" (UID: \"c451c2d9-b218-4494-93e4-200ef1a5eb75\") " pod="openshift-config-operator/openshift-config-operator-5777786469-979dq"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.684013 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmls6\" (UniqueName: \"kubernetes.io/projected/925904bd-cb57-4356-a6df-e72bd716398b-kube-api-access-kmls6\") pod \"authentication-operator-7f5c659b84-qknx7\" (UID: \"925904bd-cb57-4356-a6df-e72bd716398b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.715412 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-979dq"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.725253 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.725639 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:26.225503825 +0000 UTC m=+143.454402564 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.725952 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3451ac5b-8ad7-4419-b262-ec54012b9dc6-config-volume\") pod \"dns-default-vl2cl\" (UID: \"3451ac5b-8ad7-4419-b262-ec54012b9dc6\") " pod="openshift-dns/dns-default-vl2cl"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.726286 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.726339 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3451ac5b-8ad7-4419-b262-ec54012b9dc6-metrics-tls\") pod \"dns-default-vl2cl\" (UID: \"3451ac5b-8ad7-4419-b262-ec54012b9dc6\") " pod="openshift-dns/dns-default-vl2cl"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.729213 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3451ac5b-8ad7-4419-b262-ec54012b9dc6-config-volume\") pod \"dns-default-vl2cl\" (UID: \"3451ac5b-8ad7-4419-b262-ec54012b9dc6\") " pod="openshift-dns/dns-default-vl2cl"
Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.775418 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:26.275393067 +0000 UTC m=+143.504291676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.777726 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcd2z\" (UniqueName: \"kubernetes.io/projected/1e6d8af4-164c-4fc2-9032-596a1de18a11-kube-api-access-vcd2z\") pod \"migrator-866fcbc849-c49nj\" (UID: \"1e6d8af4-164c-4fc2-9032-596a1de18a11\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-c49nj"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.783898 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-79nbg\" (UniqueName: \"kubernetes.io/projected/16e64d17-7476-4caf-af16-73d93e2c1085-kube-api-access-79nbg\") pod \"packageserver-7d4fc7d867-7fnwf\" (UID: \"16e64d17-7476-4caf-af16-73d93e2c1085\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.786002 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-77449"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.790760 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-c49nj"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.796553 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3451ac5b-8ad7-4419-b262-ec54012b9dc6-metrics-tls\") pod \"dns-default-vl2cl\" (UID: \"3451ac5b-8ad7-4419-b262-ec54012b9dc6\") " pod="openshift-dns/dns-default-vl2cl"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.799273 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n62zj\" (UniqueName: \"kubernetes.io/projected/1d983c82-449b-4550-986f-5d378560b332-kube-api-access-n62zj\") pod \"openshift-apiserver-operator-846cbfc458-8chxh\" (UID: \"1d983c82-449b-4550-986f-5d378560b332\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-8chxh"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.804209 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-vd2k2"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.813620 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.842255 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.842875 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g67j4\" (UniqueName: \"kubernetes.io/projected/3a884209-147f-4786-9a92-5eb1a685dc1b-kube-api-access-g67j4\") pod \"package-server-manager-77f986bd66-h4rkf\" (UID: \"3a884209-147f-4786-9a92-5eb1a685dc1b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-h4rkf"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.844014 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj9k5\" (UniqueName: \"kubernetes.io/projected/765dcc91-6c2c-4e8e-a85d-b083d0a8e261-kube-api-access-tj9k5\") pod \"machine-config-server-gpd7g\" (UID: \"765dcc91-6c2c-4e8e-a85d-b083d0a8e261\") " pod="openshift-machine-config-operator/machine-config-server-gpd7g"
Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.847749 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:26.347366578 +0000 UTC m=+143.576265197 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.852236 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-db2dq\" (UniqueName: \"kubernetes.io/projected/d7cb0851-2110-44f5-b3fc-77478d1fd49f-kube-api-access-db2dq\") pod \"multus-admission-controller-69db94689b-g57zz\" (UID: \"d7cb0851-2110-44f5-b3fc-77478d1fd49f\") " pod="openshift-multus/multus-admission-controller-69db94689b-g57zz"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.854590 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq9mh\" (UniqueName: \"kubernetes.io/projected/e17b81d4-9c4f-44ee-acaa-e008890ca2fd-kube-api-access-cq9mh\") pod \"catalog-operator-75ff9f647d-pcrzq\" (UID: \"e17b81d4-9c4f-44ee-acaa-e008890ca2fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.875527 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-g57zz"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.878327 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sklkw\" (UniqueName: \"kubernetes.io/projected/3bcb987d-ffcc-436f-b7fb-75a4d2e4952d-kube-api-access-sklkw\") pod \"console-operator-67c89758df-67svm\" (UID: \"3bcb987d-ffcc-436f-b7fb-75a4d2e4952d\") " pod="openshift-console-operator/console-operator-67c89758df-67svm"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.896113 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-h4rkf"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.897770 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6m5r\" (UniqueName: \"kubernetes.io/projected/497eadd3-187e-4dfb-82c3-fad0d59eb723-kube-api-access-b6m5r\") pod \"image-pruner-29489760-jjbnv\" (UID: \"497eadd3-187e-4dfb-82c3-fad0d59eb723\") " pod="openshift-image-registry/image-pruner-29489760-jjbnv"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.907759 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-c5l6l\" (UID: \"6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-c5l6l"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.912906 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqdts\" (UniqueName: \"kubernetes.io/projected/49efa7f2-d990-4552-80cf-5d4a72a32ec7-kube-api-access-hqdts\") pod \"router-default-68cf44c8b8-n929c\" (UID: \"49efa7f2-d990-4552-80cf-5d4a72a32ec7\") " pod="openshift-ingress/router-default-68cf44c8b8-n929c"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.915162 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-67svm"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.930839 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvhfg\" (UniqueName: \"kubernetes.io/projected/10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730-kube-api-access-bvhfg\") pod \"machine-api-operator-755bb95488-jptld\" (UID: \"10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jptld"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.931442 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29489760-jjbnv"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.941068 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4jkm\" (UniqueName: \"kubernetes.io/projected/ddb3d5eb-9401-4032-9bcd-c2f798fbaf51-kube-api-access-b4jkm\") pod \"cluster-image-registry-operator-86c45576b9-t49fp\" (UID: \"ddb3d5eb-9401-4032-9bcd-c2f798fbaf51\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.953085 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.955377 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24efc76d-eb4f-4036-804e-a71705eb1a78-cert\") pod \"ingress-canary-bqdwj\" (UID: \"24efc76d-eb4f-4036-804e-a71705eb1a78\") " pod="openshift-ingress-canary/ingress-canary-bqdwj"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.955458 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8c8a6437-b0cb-4825-999f-6e523fd394e9-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hvgtl\" (UID: \"8c8a6437-b0cb-4825-999f-6e523fd394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.955600 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.960909 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.962143 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8c8a6437-b0cb-4825-999f-6e523fd394e9-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hvgtl\" (UID: \"8c8a6437-b0cb-4825-999f-6e523fd394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl"
Jan 26 00:11:25 crc kubenswrapper[5110]: E0126 00:11:25.962789 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:26.462751754 +0000 UTC m=+143.691650363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.983521 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24efc76d-eb4f-4036-804e-a71705eb1a78-cert\") pod \"ingress-canary-bqdwj\" (UID: \"24efc76d-eb4f-4036-804e-a71705eb1a78\") " pod="openshift-ingress-canary/ingress-canary-bqdwj"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.989466 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbpsj\" (UniqueName: \"kubernetes.io/projected/0be01a81-9d33-487a-abf9-0284a7b3f24b-kube-api-access-kbpsj\") pod \"service-ca-74545575db-l6gvr\" (UID: \"0be01a81-9d33-487a-abf9-0284a7b3f24b\") " pod="openshift-service-ca/service-ca-74545575db-l6gvr"
Jan 26 00:11:25 crc kubenswrapper[5110]: I0126 00:11:25.994180 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p894f\" (UniqueName: \"kubernetes.io/projected/ce4b4a91-4053-4fb9-be7e-ac51e71a829e-kube-api-access-p894f\") pod \"olm-operator-5cdf44d969-99xsn\" (UID: \"ce4b4a91-4053-4fb9-be7e-ac51e71a829e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn"
Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.019413 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-jptld"
Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.022249 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6dh9\" (UniqueName: \"kubernetes.io/projected/5f875209-3895-465c-9992-37ef91c4dda9-kube-api-access-t6dh9\") pod \"machine-config-controller-f9cdd68f7-55st2\" (UID: \"5f875209-3895-465c-9992-37ef91c4dda9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-55st2"
Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.023563 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kjn2\" (UniqueName: \"kubernetes.io/projected/1cea4c4b-a35b-4b39-a268-15234da1bc4a-kube-api-access-6kjn2\") pod \"machine-approver-54c688565-tgtdw\" (UID: \"1cea4c4b-a35b-4b39-a268-15234da1bc4a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-tgtdw"
Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.065346 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:26 crc kubenswrapper[5110]: E0126 00:11:26.065886 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:26.565865276 +0000 UTC m=+143.794763885 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.069893 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-vl2cl"
Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.079043 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-8chxh"
Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.111277 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-gpd7g"
Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.174765 5110 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-n929c" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.175891 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:26 crc kubenswrapper[5110]: E0126 00:11:26.176214 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:26.676200386 +0000 UTC m=+143.905098995 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.192132 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6kl9\" (UniqueName: \"kubernetes.io/projected/8c8a6437-b0cb-4825-999f-6e523fd394e9-kube-api-access-z6kl9\") pod \"cni-sysctl-allowlist-ds-hvgtl\" (UID: \"8c8a6437-b0cb-4825-999f-6e523fd394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.192140 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8xsx\" (UniqueName: 
\"kubernetes.io/projected/24efc76d-eb4f-4036-804e-a71705eb1a78-kube-api-access-w8xsx\") pod \"ingress-canary-bqdwj\" (UID: \"24efc76d-eb4f-4036-804e-a71705eb1a78\") " pod="openshift-ingress-canary/ingress-canary-bqdwj" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.193596 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-24hmk\" (UniqueName: \"kubernetes.io/projected/4464550b-e8e5-4f07-8e54-40eb792eb201-kube-api-access-24hmk\") pod \"csi-hostpathplugin-pzlvv\" (UID: \"4464550b-e8e5-4f07-8e54-40eb792eb201\") " pod="hostpath-provisioner/csi-hostpathplugin-pzlvv" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.194034 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-c5l6l" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.195024 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfnsx\" (UniqueName: \"kubernetes.io/projected/d1eb870e-2fb9-4e38-94b6-80fb61ed6ad3-kube-api-access-gfnsx\") pod \"service-ca-operator-5b9c976747-2dt85\" (UID: \"d1eb870e-2fb9-4e38-94b6-80fb61ed6ad3\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2dt85" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.196549 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-55st2" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.197375 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ddb3d5eb-9401-4032-9bcd-c2f798fbaf51-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-t49fp\" (UID: \"ddb3d5eb-9401-4032-9bcd-c2f798fbaf51\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.204028 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.205150 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b424\" (UniqueName: \"kubernetes.io/projected/9cf38d4d-d415-4e81-85c3-bb2c65f7423b-kube-api-access-9b424\") pod \"machine-config-operator-67c9d58cbb-4j4dd\" (UID: \"9cf38d4d-d415-4e81-85c3-bb2c65f7423b\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4j4dd" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.216782 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-l6gvr" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.216824 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5fxf\" (UniqueName: \"kubernetes.io/projected/77aa983a-f3c1-4799-a84b-c3c7a381a1bc-kube-api-access-k5fxf\") pod \"control-plane-machine-set-operator-75ffdb6fcd-crnjv\" (UID: \"77aa983a-f3c1-4799-a84b-c3c7a381a1bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-crnjv" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.222644 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-crnjv" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.225051 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhpqv\" (UniqueName: \"kubernetes.io/projected/4ba4d5a4-755b-4d6e-b250-cd705b244775-kube-api-access-mhpqv\") pod \"marketplace-operator-547dbd544d-bjwbh\" (UID: \"4ba4d5a4-755b-4d6e-b250-cd705b244775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.316318 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.317281 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-tgtdw" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.318109 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4j4dd" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.318195 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pzlvv" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.318828 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:26 crc kubenswrapper[5110]: E0126 00:11:26.318980 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:26.818953663 +0000 UTC m=+144.047852272 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.319238 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:26 crc kubenswrapper[5110]: E0126 00:11:26.320222 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: 
nodeName:}" failed. No retries permitted until 2026-01-26 00:11:26.820195639 +0000 UTC m=+144.049094248 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.415123 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.416438 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bqdwj" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.420243 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:26 crc kubenswrapper[5110]: E0126 00:11:26.420549 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:26.92052569 +0000 UTC m=+144.149424299 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.430225 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.464079 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2dt85" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.478507 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gk4w9"] Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.521528 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:26 crc kubenswrapper[5110]: E0126 00:11:26.521930 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:27.021916681 +0000 UTC m=+144.250815290 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.609181 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ngcjz" event={"ID":"125ce521-01f2-4198-b07e-e538e248c82f","Type":"ContainerStarted","Data":"6d73a7da1b50d3baa1705d0314c519f23f7dd0d9fae9e80e525027782586e57d"} Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.611003 5110 generic.go:358] "Generic (PLEG): container finished" podID="ecc19229-da38-4ff6-bc7a-d5864b4d1101" containerID="175734c0a6fbb0debfcc42acd7818a774058a0bed72f8718db1a8aa8074c4065" exitCode=0 Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.611063 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" event={"ID":"ecc19229-da38-4ff6-bc7a-d5864b4d1101","Type":"ContainerDied","Data":"175734c0a6fbb0debfcc42acd7818a774058a0bed72f8718db1a8aa8074c4065"} Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.613112 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bfssk" event={"ID":"be1e2506-51b5-4287-a99d-37df39a44ff7","Type":"ContainerStarted","Data":"bb44acf689269c7cf85b86f83addc50a7e4099af814e771240502bd7fc297996"} Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.615423 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs" 
event={"ID":"6bf41d48-487f-4123-9d83-bef3d1efaa68","Type":"ContainerStarted","Data":"6a357ec6666c924405df5c0d7d22768ca0b2d9541b838df1e589990bb27fa212"} Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.616726 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" event={"ID":"b4b9ee78-19a7-41bf-97e4-9ab13bad2730","Type":"ContainerStarted","Data":"95597becabb11d191639e23ff0da6faf49de134608802bdcff1e857e652878e3"} Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.617587 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.618974 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-46ljf" event={"ID":"6314e04a-09b6-48f6-950c-271199f2f803","Type":"ContainerStarted","Data":"1de50278eb50c4f72b62ff9e6fa1d9d6db06e43aad77b21285424ca5e72089e5"} Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.620420 5110 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-tg86c container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.620450 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" podUID="b4b9ee78-19a7-41bf-97e4-9ab13bad2730" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.621492 5110 generic.go:358] "Generic (PLEG): container finished" podID="0cc5795e-df15-48c8-949a-7254633f27e3" 
containerID="6e5d1b7da9ec99e91f2ad80998fa15e806f24fb083bd35179f26b7714733ffc7" exitCode=0 Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.621532 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" event={"ID":"0cc5795e-df15-48c8-949a-7254633f27e3","Type":"ContainerDied","Data":"6e5d1b7da9ec99e91f2ad80998fa15e806f24fb083bd35179f26b7714733ffc7"} Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.622322 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:26 crc kubenswrapper[5110]: E0126 00:11:26.624144 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:27.124056635 +0000 UTC m=+144.352955244 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.631888 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jp8q" event={"ID":"bad86cda-e9f6-44e8-84dd-966fc1a2434b","Type":"ContainerStarted","Data":"b4f8ae6925015992e81ab6b65d34a1d71427cb0b5aadb1329929ceee2e8e72d9"} Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.744576 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:26 crc kubenswrapper[5110]: E0126 00:11:26.746289 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:27.246273338 +0000 UTC m=+144.475171947 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.850063 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:26 crc kubenswrapper[5110]: E0126 00:11:26.850860 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:27.350822981 +0000 UTC m=+144.579721590 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.855147 5110 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-8ssgr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.855226 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr" podUID="9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.855331 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-cg4nl" event={"ID":"aaaae9da-7fe6-47c9-8a79-20343d5f6c43","Type":"ContainerStarted","Data":"85f24efa308d378829c9a0ae17518cfc20eec08f42fd4191d0d1071e4a155160"} Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.861760 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-979dq"] Jan 26 00:11:26 crc kubenswrapper[5110]: I0126 00:11:26.989789 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:26 crc kubenswrapper[5110]: E0126 00:11:26.990416 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:27.490394196 +0000 UTC m=+144.719292805 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:27 crc kubenswrapper[5110]: I0126 00:11:27.094526 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:27 crc kubenswrapper[5110]: E0126 00:11:27.094916 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:27.594868347 +0000 UTC m=+144.823766956 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:27 crc kubenswrapper[5110]: I0126 00:11:27.099671 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:27 crc kubenswrapper[5110]: E0126 00:11:27.100194 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:27.60016275 +0000 UTC m=+144.829061359 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:27 crc kubenswrapper[5110]: I0126 00:11:27.200627 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:27 crc kubenswrapper[5110]: I0126 00:11:27.308125 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-tjq6l" podStartSLOduration=124.308102412 podStartE2EDuration="2m4.308102412s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:27.297039952 +0000 UTC m=+144.525938571" watchObservedRunningTime="2026-01-26 00:11:27.308102412 +0000 UTC m=+144.537001021" Jan 26 00:11:27 crc kubenswrapper[5110]: E0126 00:11:27.444027 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:27.943988291 +0000 UTC m=+145.172886900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:27 crc kubenswrapper[5110]: I0126 00:11:27.507163 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:27 crc kubenswrapper[5110]: E0126 00:11:27.507510 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:28.007492617 +0000 UTC m=+145.236391226 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:27 crc kubenswrapper[5110]: I0126 00:11:27.562873 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr" podStartSLOduration=124.562853568 podStartE2EDuration="2m4.562853568s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:27.56190846 +0000 UTC m=+144.790807079" watchObservedRunningTime="2026-01-26 00:11:27.562853568 +0000 UTC m=+144.791752167" Jan 26 00:11:27 crc kubenswrapper[5110]: I0126 00:11:27.608462 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:27 crc kubenswrapper[5110]: E0126 00:11:27.608947 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:28.10892925 +0000 UTC m=+145.337827859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:27 crc kubenswrapper[5110]: I0126 00:11:27.649301 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2gvm5" podStartSLOduration=124.649282117 podStartE2EDuration="2m4.649282117s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:27.64764964 +0000 UTC m=+144.876548249" watchObservedRunningTime="2026-01-26 00:11:27.649282117 +0000 UTC m=+144.878180726" Jan 26 00:11:27 crc kubenswrapper[5110]: I0126 00:11:27.709774 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:27 crc kubenswrapper[5110]: E0126 00:11:27.710252 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:28.210229619 +0000 UTC m=+145.439128408 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:27 crc kubenswrapper[5110]: I0126 00:11:27.834148 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:27 crc kubenswrapper[5110]: E0126 00:11:27.834610 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:28.334583144 +0000 UTC m=+145.563481753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:27 crc kubenswrapper[5110]: I0126 00:11:27.940384 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:27 crc kubenswrapper[5110]: E0126 00:11:27.941030 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:28.441001141 +0000 UTC m=+145.669899740 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:27 crc kubenswrapper[5110]: I0126 00:11:27.951666 5110 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-tg86c container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Jan 26 00:11:27 crc kubenswrapper[5110]: I0126 00:11:27.951751 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" podUID="b4b9ee78-19a7-41bf-97e4-9ab13bad2730" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Jan 26 00:11:28 crc kubenswrapper[5110]: I0126 00:11:28.041995 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:28 crc kubenswrapper[5110]: E0126 00:11:28.044725 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:28.544682989 +0000 UTC m=+145.773581608 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:28 crc kubenswrapper[5110]: I0126 00:11:28.396117 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:28 crc kubenswrapper[5110]: E0126 00:11:28.396505 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:28.89649122 +0000 UTC m=+146.125389829 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:28 crc kubenswrapper[5110]: I0126 00:11:28.578484 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:28 crc kubenswrapper[5110]: I0126 00:11:28.578622 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" podStartSLOduration=125.578587005 podStartE2EDuration="2m5.578587005s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:28.577764871 +0000 UTC m=+145.806663480" watchObservedRunningTime="2026-01-26 00:11:28.578587005 +0000 UTC m=+145.807485614" Jan 26 00:11:28 crc kubenswrapper[5110]: E0126 00:11:28.578863 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:29.078837212 +0000 UTC m=+146.307735821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:28 crc kubenswrapper[5110]: I0126 00:11:28.705127 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:28 crc kubenswrapper[5110]: E0126 00:11:28.705509 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:29.205485724 +0000 UTC m=+146.434384333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:28 crc kubenswrapper[5110]: I0126 00:11:28.882385 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:28 crc kubenswrapper[5110]: E0126 00:11:28.882579 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:29.382542843 +0000 UTC m=+146.611441452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:28 crc kubenswrapper[5110]: I0126 00:11:28.883319 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:28 crc kubenswrapper[5110]: E0126 00:11:28.883683 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:29.383671455 +0000 UTC m=+146.612570064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:28 crc kubenswrapper[5110]: I0126 00:11:28.936880 5110 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-8ssgr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 00:11:28 crc kubenswrapper[5110]: I0126 00:11:28.936960 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr" podUID="9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 00:11:28 crc kubenswrapper[5110]: I0126 00:11:28.984619 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:28 crc kubenswrapper[5110]: E0126 00:11:28.984919 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-26 00:11:29.484868701 +0000 UTC m=+146.713767310 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:28 crc kubenswrapper[5110]: I0126 00:11:28.985238 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:28 crc kubenswrapper[5110]: E0126 00:11:28.985749 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:29.485720076 +0000 UTC m=+146.714618685 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:28 crc kubenswrapper[5110]: W0126 00:11:28.986846 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c8a6437_b0cb_4825_999f_6e523fd394e9.slice/crio-312daff4e7a4a9436a5b5a95add3d9d9836fad29a40674854b5016b9d6432c8d WatchSource:0}: Error finding container 312daff4e7a4a9436a5b5a95add3d9d9836fad29a40674854b5016b9d6432c8d: Status 404 returned error can't find the container with id 312daff4e7a4a9436a5b5a95add3d9d9836fad29a40674854b5016b9d6432c8d Jan 26 00:11:29 crc kubenswrapper[5110]: I0126 00:11:29.145573 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:29 crc kubenswrapper[5110]: E0126 00:11:29.145771 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:29.645738783 +0000 UTC m=+146.874637392 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:29 crc kubenswrapper[5110]: I0126 00:11:29.146629 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:29 crc kubenswrapper[5110]: E0126 00:11:29.146931 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:29.646913956 +0000 UTC m=+146.875812565 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:29 crc kubenswrapper[5110]: I0126 00:11:29.248632 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:29 crc kubenswrapper[5110]: E0126 00:11:29.248814 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:29.748776562 +0000 UTC m=+146.977675171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:29 crc kubenswrapper[5110]: I0126 00:11:29.248981 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:29 crc kubenswrapper[5110]: E0126 00:11:29.249410 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:29.74939998 +0000 UTC m=+146.978298599 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:29 crc kubenswrapper[5110]: I0126 00:11:29.352138 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:29 crc kubenswrapper[5110]: E0126 00:11:29.352306 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:29.852262754 +0000 UTC m=+147.081161363 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:29 crc kubenswrapper[5110]: I0126 00:11:29.352617 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:29 crc kubenswrapper[5110]: E0126 00:11:29.353057 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:29.853034086 +0000 UTC m=+147.081932695 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:29 crc kubenswrapper[5110]: I0126 00:11:29.364283 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:11:29 crc kubenswrapper[5110]: I0126 00:11:29.372696 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-7plfs" podStartSLOduration=126.372676824 podStartE2EDuration="2m6.372676824s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:29.214072778 +0000 UTC m=+146.442971387" watchObservedRunningTime="2026-01-26 00:11:29.372676824 +0000 UTC m=+146.601575433" Jan 26 00:11:29 crc kubenswrapper[5110]: I0126 00:11:29.374856 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jp8q" podStartSLOduration=126.374847017 podStartE2EDuration="2m6.374847017s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:29.230434331 +0000 UTC m=+146.459332960" watchObservedRunningTime="2026-01-26 00:11:29.374847017 +0000 UTC m=+146.603745616" Jan 26 00:11:29 crc kubenswrapper[5110]: I0126 00:11:29.376970 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-46ljf" podStartSLOduration=126.376964788 podStartE2EDuration="2m6.376964788s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:29.266426902 +0000 UTC m=+146.495325531" watchObservedRunningTime="2026-01-26 00:11:29.376964788 +0000 UTC m=+146.605863397" Jan 26 00:11:29 crc kubenswrapper[5110]: I0126 00:11:29.413023 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ngcjz" podStartSLOduration=126.41300665 podStartE2EDuration="2m6.41300665s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:29.392477976 +0000 UTC m=+146.621376585" watchObservedRunningTime="2026-01-26 00:11:29.41300665 +0000 UTC m=+146.641905259" Jan 26 00:11:29 crc kubenswrapper[5110]: I0126 00:11:29.462644 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:29 crc kubenswrapper[5110]: E0126 00:11:29.464561 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:29.96454011 +0000 UTC m=+147.193438719 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:29 crc kubenswrapper[5110]: I0126 00:11:29.632993 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:29 crc kubenswrapper[5110]: E0126 00:11:29.639009 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:30.138987754 +0000 UTC m=+147.367886363 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:29 crc kubenswrapper[5110]: I0126 00:11:29.686171 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" Jan 26 00:11:29 crc kubenswrapper[5110]: I0126 00:11:29.734484 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:29 crc kubenswrapper[5110]: E0126 00:11:29.735112 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:30.235081742 +0000 UTC m=+147.463980351 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:29 crc kubenswrapper[5110]: I0126 00:11:29.845675 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:29 crc kubenswrapper[5110]: E0126 00:11:29.846376 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:30.346356709 +0000 UTC m=+147.575255318 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.035426 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:30 crc kubenswrapper[5110]: E0126 00:11:30.036017 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:30.535990902 +0000 UTC m=+147.764889501 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.152216 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:30 crc kubenswrapper[5110]: E0126 00:11:30.153104 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:30.653077748 +0000 UTC m=+147.881976357 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.181123 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" event={"ID":"8c8a6437-b0cb-4825-999f-6e523fd394e9","Type":"ContainerStarted","Data":"312daff4e7a4a9436a5b5a95add3d9d9836fad29a40674854b5016b9d6432c8d"} Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.482671 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.483661 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.483814 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs\") pod \"network-metrics-daemon-8ndzr\" (UID: \"040d3d5f-c02a-4a70-92af-70700fd9e3c3\") " pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.483892 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.483948 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.483975 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.485176 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:30 crc kubenswrapper[5110]: E0126 00:11:30.485382 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:30.985312974 +0000 UTC m=+148.214211583 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.504481 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.506660 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.509618 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.524894 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/040d3d5f-c02a-4a70-92af-70700fd9e3c3-metrics-certs\") pod \"network-metrics-daemon-8ndzr\" (UID: \"040d3d5f-c02a-4a70-92af-70700fd9e3c3\") " pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.565923 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.566093 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8ndzr" Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.567125 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.576780 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.585288 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:30 crc kubenswrapper[5110]: E0126 00:11:30.586836 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:31.086818779 +0000 UTC m=+148.315717378 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.606660 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-979dq" event={"ID":"c451c2d9-b218-4494-93e4-200ef1a5eb75","Type":"ContainerStarted","Data":"3d7ae79eba37362bd5bed9457a5dfedc11ff038d3ef3d3e359f718022d18151a"} Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.613534 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-tgtdw" event={"ID":"1cea4c4b-a35b-4b39-a268-15234da1bc4a","Type":"ContainerStarted","Data":"2a17a15fe30e5789802ad501e8742bb34ec9f8d2dde03da18c6e4822dcc9cb17"} Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.624904 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-gpd7g" event={"ID":"765dcc91-6c2c-4e8e-a85d-b083d0a8e261","Type":"ContainerStarted","Data":"8b35c6e1f850f55b2eb97419df83b930217639d42daf91b0dfa036d8fe3dc41a"} Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.626622 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-n929c" event={"ID":"49efa7f2-d990-4552-80cf-5d4a72a32ec7","Type":"ContainerStarted","Data":"260a4bf86eaa1a386a5e8c43fbd160f1db55845bfe9c27bdecf023fc3cdd45b7"} Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.629478 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gk4w9" event={"ID":"1b2941d0-33a9-475b-9ed9-5534ffda45c6","Type":"ContainerStarted","Data":"69fb9691c2eb36c205a369682d8bd30e7e84c2620d10d2704942306e1bf950b1"} Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.691431 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:30 crc kubenswrapper[5110]: E0126 00:11:30.691781 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:31.191760033 +0000 UTC m=+148.420658642 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.837050 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:30 crc kubenswrapper[5110]: E0126 00:11:30.839672 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:31.339655349 +0000 UTC m=+148.568553958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:30 crc kubenswrapper[5110]: I0126 00:11:30.939285 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:30 crc kubenswrapper[5110]: E0126 00:11:30.939631 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:31.439598829 +0000 UTC m=+148.668497438 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:31 crc kubenswrapper[5110]: I0126 00:11:31.040394 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:31 crc kubenswrapper[5110]: E0126 00:11:31.040758 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:31.540743893 +0000 UTC m=+148.769642502 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:31 crc kubenswrapper[5110]: I0126 00:11:31.146605 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:31 crc kubenswrapper[5110]: E0126 00:11:31.147114 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:31.647085818 +0000 UTC m=+148.875984427 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:31 crc kubenswrapper[5110]: I0126 00:11:31.252572 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:31 crc kubenswrapper[5110]: E0126 00:11:31.252988 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:31.752968559 +0000 UTC m=+148.981867168 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:31 crc kubenswrapper[5110]: I0126 00:11:31.344857 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-c49nj"] Jan 26 00:11:31 crc kubenswrapper[5110]: I0126 00:11:31.356303 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:31 crc kubenswrapper[5110]: E0126 00:11:31.356763 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:31.856740939 +0000 UTC m=+149.085639548 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:31 crc kubenswrapper[5110]: I0126 00:11:31.465894 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:31 crc kubenswrapper[5110]: E0126 00:11:31.466436 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:31.96641699 +0000 UTC m=+149.195315599 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:31 crc kubenswrapper[5110]: I0126 00:11:31.573381 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:31 crc kubenswrapper[5110]: E0126 00:11:31.573903 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:32.073875027 +0000 UTC m=+149.302773636 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:31 crc kubenswrapper[5110]: I0126 00:11:31.635662 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bfssk" event={"ID":"be1e2506-51b5-4287-a99d-37df39a44ff7","Type":"ContainerStarted","Data":"299579b318de932fa1afcc001f5b5522c63d5f6d26d3040f378f506ebd4fc7c9"} Jan 26 00:11:31 crc kubenswrapper[5110]: I0126 00:11:31.649474 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-c49nj" event={"ID":"1e6d8af4-164c-4fc2-9032-596a1de18a11","Type":"ContainerStarted","Data":"d461290205ed6ede1d5a2f74f9cf7f120d0ae523d7bc54170ccee9bdbed0b4d6"} Jan 26 00:11:31 crc kubenswrapper[5110]: I0126 00:11:31.676837 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:31 crc kubenswrapper[5110]: E0126 00:11:31.677354 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:32.177335259 +0000 UTC m=+149.406233868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:31 crc kubenswrapper[5110]: I0126 00:11:31.787852 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:31 crc kubenswrapper[5110]: E0126 00:11:31.788243 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:32.288218935 +0000 UTC m=+149.517117544 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:31 crc kubenswrapper[5110]: I0126 00:11:31.888941 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:31 crc kubenswrapper[5110]: E0126 00:11:31.890363 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:32.390337157 +0000 UTC m=+149.619235766 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:32 crc kubenswrapper[5110]: I0126 00:11:32.056043 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:32 crc kubenswrapper[5110]: E0126 00:11:32.056817 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:32.556777658 +0000 UTC m=+149.785676267 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:32 crc kubenswrapper[5110]: I0126 00:11:32.157774 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:32 crc kubenswrapper[5110]: E0126 00:11:32.158220 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:32.658205641 +0000 UTC m=+149.887104250 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:32 crc kubenswrapper[5110]: I0126 00:11:32.258804 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:32 crc kubenswrapper[5110]: E0126 00:11:32.259303 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:32.759274503 +0000 UTC m=+149.988173112 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:32 crc kubenswrapper[5110]: I0126 00:11:32.361005 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:32 crc kubenswrapper[5110]: E0126 00:11:32.361926 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:32.86190026 +0000 UTC m=+150.090799069 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:32 crc kubenswrapper[5110]: I0126 00:11:32.465133 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:32 crc kubenswrapper[5110]: E0126 00:11:32.466667 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:32.966623968 +0000 UTC m=+150.195522577 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:32 crc kubenswrapper[5110]: I0126 00:11:32.469255 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:32 crc kubenswrapper[5110]: E0126 00:11:32.469663 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:32.969654816 +0000 UTC m=+150.198553425 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:32 crc kubenswrapper[5110]: I0126 00:11:32.571160 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:32 crc kubenswrapper[5110]: E0126 00:11:32.571517 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:33.07148876 +0000 UTC m=+150.300387369 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:32 crc kubenswrapper[5110]: I0126 00:11:32.690171 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:32 crc kubenswrapper[5110]: E0126 00:11:32.690581 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:33.190563573 +0000 UTC m=+150.419462182 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:32 crc kubenswrapper[5110]: I0126 00:11:32.791642 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:32 crc kubenswrapper[5110]: E0126 00:11:32.792170 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:33.29214887 +0000 UTC m=+150.521047479 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:32 crc kubenswrapper[5110]: I0126 00:11:32.896185 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:32 crc kubenswrapper[5110]: E0126 00:11:32.896674 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:33.396656961 +0000 UTC m=+150.625555570 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:32 crc kubenswrapper[5110]: I0126 00:11:32.898612 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" event={"ID":"0cc5795e-df15-48c8-949a-7254633f27e3","Type":"ContainerStarted","Data":"a90df42997fc5ae7a9792254906cac7073ac3b1a4a33169586540e53013fa2c1"} Jan 26 00:11:32 crc kubenswrapper[5110]: I0126 00:11:32.901715 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" event={"ID":"8c8a6437-b0cb-4825-999f-6e523fd394e9","Type":"ContainerStarted","Data":"b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a"} Jan 26 00:11:32 crc kubenswrapper[5110]: I0126 00:11:32.904423 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" Jan 26 00:11:32 crc kubenswrapper[5110]: I0126 00:11:32.919921 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-979dq" event={"ID":"c451c2d9-b218-4494-93e4-200ef1a5eb75","Type":"ContainerStarted","Data":"6cd27706473896f831f36fa9d0d3c7af00894ed1c8171b28d2ac86fda059e6a6"} Jan 26 00:11:32 crc kubenswrapper[5110]: I0126 00:11:32.928031 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-tgtdw" event={"ID":"1cea4c4b-a35b-4b39-a268-15234da1bc4a","Type":"ContainerStarted","Data":"09322fb2ac323c003c3a55fcc5efa023d7dab6116b7a54a9c6ea13ec7582a101"} Jan 26 00:11:32 
crc kubenswrapper[5110]: I0126 00:11:32.933669 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-gpd7g" event={"ID":"765dcc91-6c2c-4e8e-a85d-b083d0a8e261","Type":"ContainerStarted","Data":"912bbfd7d6c056b4b99fe639f87b1eb47335b5a53a7b54b2209e143dcbc381f9"} Jan 26 00:11:32 crc kubenswrapper[5110]: I0126 00:11:32.938618 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-n929c" event={"ID":"49efa7f2-d990-4552-80cf-5d4a72a32ec7","Type":"ContainerStarted","Data":"de2bbffbb33a8b89665cf46113d36519f5f51811defae29479c7e9f076d8140f"} Jan 26 00:11:32 crc kubenswrapper[5110]: I0126 00:11:32.942321 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gk4w9" event={"ID":"1b2941d0-33a9-475b-9ed9-5534ffda45c6","Type":"ContainerStarted","Data":"93e3db35e75de71f3ad11d2e42f2f7c0016ac9383ba6121ca614e8aae2aa40c6"} Jan 26 00:11:32 crc kubenswrapper[5110]: I0126 00:11:32.953548 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" event={"ID":"ecc19229-da38-4ff6-bc7a-d5864b4d1101","Type":"ContainerStarted","Data":"17f72f3347e66509ca5e780918d7ddcc2dcfece50db5e30edc5970006cf6c70d"} Jan 26 00:11:32 crc kubenswrapper[5110]: I0126 00:11:32.957847 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bfssk" podStartSLOduration=129.95783212 podStartE2EDuration="2m9.95783212s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:31.752577354 +0000 UTC m=+148.981475973" watchObservedRunningTime="2026-01-26 00:11:32.95783212 +0000 UTC m=+150.186730729" Jan 26 00:11:32 crc 
kubenswrapper[5110]: I0126 00:11:32.962609 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-cg4nl" event={"ID":"aaaae9da-7fe6-47c9-8a79-20343d5f6c43","Type":"ContainerStarted","Data":"c4e7099785a57972ccb9fadc4adb569a039372681ed1409b5123b44060a93353"} Jan 26 00:11:32 crc kubenswrapper[5110]: I0126 00:11:32.976545 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" podStartSLOduration=10.976527971 podStartE2EDuration="10.976527971s" podCreationTimestamp="2026-01-26 00:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:32.96025335 +0000 UTC m=+150.189151959" watchObservedRunningTime="2026-01-26 00:11:32.976527971 +0000 UTC m=+150.205426580" Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:32.997483 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:33 crc kubenswrapper[5110]: E0126 00:11:32.997950 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:33.497911849 +0000 UTC m=+150.726810458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:32.998039 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:33 crc kubenswrapper[5110]: E0126 00:11:32.998506 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:33.498498096 +0000 UTC m=+150.727396705 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.072074 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-cg4nl" podStartSLOduration=130.072052813 podStartE2EDuration="2m10.072052813s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:33.068993544 +0000 UTC m=+150.297892163" watchObservedRunningTime="2026-01-26 00:11:33.072052813 +0000 UTC m=+150.300951422" Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.099296 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:33 crc kubenswrapper[5110]: E0126 00:11:33.099422 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:33.599377393 +0000 UTC m=+150.828276012 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.100443 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:33 crc kubenswrapper[5110]: E0126 00:11:33.101901 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:33.601888495 +0000 UTC m=+150.830787104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.177304 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-n929c" Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.201495 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:33 crc kubenswrapper[5110]: E0126 00:11:33.201785 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:33.701767723 +0000 UTC m=+150.930666332 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.219002 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-gpd7g" podStartSLOduration=11.218977701 podStartE2EDuration="11.218977701s" podCreationTimestamp="2026-01-26 00:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:33.1584395 +0000 UTC m=+150.387338109" watchObservedRunningTime="2026-01-26 00:11:33.218977701 +0000 UTC m=+150.447876300" Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.219611 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gk4w9" podStartSLOduration=130.219607049 podStartE2EDuration="2m10.219607049s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:33.215818139 +0000 UTC m=+150.444716748" watchObservedRunningTime="2026-01-26 00:11:33.219607049 +0000 UTC m=+150.448505658" Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.239017 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.239114 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="not ready" pod="openshift-console/console-64d44f6ddf-tjq6l" Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.244845 5110 patch_prober.go:28] interesting pod/console-64d44f6ddf-tjq6l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.244920 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-tjq6l" podUID="766da705-879b-4d57-a34d-43eca4c9da19" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.265869 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" podStartSLOduration=130.265849976 podStartE2EDuration="2m10.265849976s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:33.26529366 +0000 UTC m=+150.494192279" watchObservedRunningTime="2026-01-26 00:11:33.265849976 +0000 UTC m=+150.494748595" Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.266688 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-n929c" podStartSLOduration=130.26668085 podStartE2EDuration="2m10.26668085s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:33.240150663 +0000 UTC m=+150.469049292" watchObservedRunningTime="2026-01-26 00:11:33.26668085 +0000 UTC m=+150.495579479" Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.294072 5110 
kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.294143 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.295554 5110 patch_prober.go:28] interesting pod/apiserver-8596bd845d-hsvjl container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.295660 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl" podUID="ecc19229-da38-4ff6-bc7a-d5864b4d1101" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.302854 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:33 crc kubenswrapper[5110]: E0126 00:11:33.304167 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:33.804150833 +0000 UTC m=+151.033049442 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.404211 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:33 crc kubenswrapper[5110]: E0126 00:11:33.404450 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:33.904411112 +0000 UTC m=+151.133309721 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.404572 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:33 crc kubenswrapper[5110]: E0126 00:11:33.404917 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:33.904910437 +0000 UTC m=+151.133809046 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.505687 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:33 crc kubenswrapper[5110]: E0126 00:11:33.505966 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:34.005920667 +0000 UTC m=+151.234819276 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.506174 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:33 crc kubenswrapper[5110]: E0126 00:11:33.506685 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:34.006655368 +0000 UTC m=+151.235553977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.628740 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:33 crc kubenswrapper[5110]: E0126 00:11:33.629215 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:34.129188431 +0000 UTC m=+151.358087030 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.673335 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n929c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:11:33 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 26 00:11:33 crc kubenswrapper[5110]: [+]process-running ok Jan 26 00:11:33 crc kubenswrapper[5110]: healthz check failed Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.673654 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n929c" podUID="49efa7f2-d990-4552-80cf-5d4a72a32ec7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.731201 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:33 crc kubenswrapper[5110]: E0126 00:11:33.731930 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 00:11:34.231916621 +0000 UTC m=+151.460815230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.837201 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:33 crc kubenswrapper[5110]: E0126 00:11:33.837426 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:34.337392611 +0000 UTC m=+151.566291220 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.837854 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:33 crc kubenswrapper[5110]: E0126 00:11:33.838207 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:34.338196944 +0000 UTC m=+151.567095553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.903305 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-h4rkf"] Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.933053 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-67svm"] Jan 26 00:11:33 crc kubenswrapper[5110]: I0126 00:11:33.940723 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:33 crc kubenswrapper[5110]: E0126 00:11:33.941097 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:34.441069388 +0000 UTC m=+151.669967997 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.074657 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:34 crc kubenswrapper[5110]: E0126 00:11:34.075316 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:34.575295009 +0000 UTC m=+151.804193618 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.077692 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4"] Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.122609 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-tgtdw" event={"ID":"1cea4c4b-a35b-4b39-a268-15234da1bc4a","Type":"ContainerStarted","Data":"edfb03eb463dfe26f03797ed62fd583d811b6536242abb234807e003f41988f5"} Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.125261 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-g57zz"] Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.125312 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf"] Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.128787 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-67svm" event={"ID":"3bcb987d-ffcc-436f-b7fb-75a4d2e4952d","Type":"ContainerStarted","Data":"e34bf1b46bf5b45f8f4270d301d6224b4023afe05e519a5415d173e4498f2c23"} Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.130401 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-c49nj" 
event={"ID":"1e6d8af4-164c-4fc2-9032-596a1de18a11","Type":"ContainerStarted","Data":"8b63111ea9543ad60880f6de89c814fe6e7a235abb2e654bd8d708823d68a9f4"} Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.155306 5110 generic.go:358] "Generic (PLEG): container finished" podID="c451c2d9-b218-4494-93e4-200ef1a5eb75" containerID="6cd27706473896f831f36fa9d0d3c7af00894ed1c8171b28d2ac86fda059e6a6" exitCode=0 Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.155445 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-979dq" event={"ID":"c451c2d9-b218-4494-93e4-200ef1a5eb75","Type":"ContainerDied","Data":"6cd27706473896f831f36fa9d0d3c7af00894ed1c8171b28d2ac86fda059e6a6"} Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.181476 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:34 crc kubenswrapper[5110]: E0126 00:11:34.181759 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:34.681735897 +0000 UTC m=+151.910634496 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.282471 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:34 crc kubenswrapper[5110]: E0126 00:11:34.282878 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:34.782859981 +0000 UTC m=+152.011758590 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.290475 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29489760-jjbnv"] Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.290578 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489760-vd2k2"] Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.297180 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-77449"] Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.299286 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq"] Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.315179 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.385834 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:34 crc kubenswrapper[5110]: E0126 00:11:34.387011 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:34.886985061 +0000 UTC m=+152.115883670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.562162 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:34 crc kubenswrapper[5110]: E0126 00:11:34.563086 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:35.063050222 +0000 UTC m=+152.291948821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.587392 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n929c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:11:34 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 26 00:11:34 crc kubenswrapper[5110]: [+]process-running ok Jan 26 00:11:34 crc kubenswrapper[5110]: healthz check failed Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.587488 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n929c" podUID="49efa7f2-d990-4552-80cf-5d4a72a32ec7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:11:34 crc kubenswrapper[5110]: W0126 00:11:34.638015 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16e64d17_7476_4caf_af16_73d93e2c1085.slice/crio-95660ee2ea98453bbebae2c472d742ca6cfb62abfbe5312000fbad20a151bfee WatchSource:0}: Error finding container 95660ee2ea98453bbebae2c472d742ca6cfb62abfbe5312000fbad20a151bfee: Status 404 returned error can't find the container with id 95660ee2ea98453bbebae2c472d742ca6cfb62abfbe5312000fbad20a151bfee Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.663975 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:34 crc kubenswrapper[5110]: E0126 00:11:34.664283 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:35.164254838 +0000 UTC m=+152.393153447 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.778442 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:34 crc kubenswrapper[5110]: E0126 00:11:34.779010 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:35.278990735 +0000 UTC m=+152.507889354 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.886673 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:34 crc kubenswrapper[5110]: E0126 00:11:34.887085 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:35.38705949 +0000 UTC m=+152.615958099 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:34 crc kubenswrapper[5110]: I0126 00:11:34.988430 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:34 crc kubenswrapper[5110]: E0126 00:11:34.988860 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:35.488846533 +0000 UTC m=+152.717745142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.031725 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-2dt85"] Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.055692 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn"] Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.057649 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-8ndzr"] Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.069916 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-bjwbh"] Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.091598 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:35 crc kubenswrapper[5110]: E0126 00:11:35.091954 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:35.591906293 +0000 UTC m=+152.820804902 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:35 crc kubenswrapper[5110]: W0126 00:11:35.102155 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ba4d5a4_755b_4d6e_b250_cd705b244775.slice/crio-f5fdb38cd83bd1918db414123327b2499349b9dcb1e86f6c1f6eb742753e87ca WatchSource:0}: Error finding container f5fdb38cd83bd1918db414123327b2499349b9dcb1e86f6c1f6eb742753e87ca: Status 404 returned error can't find the container with id f5fdb38cd83bd1918db414123327b2499349b9dcb1e86f6c1f6eb742753e87ca Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.152143 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-55st2"] Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.164142 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp"] Jan 26 00:11:35 crc kubenswrapper[5110]: W0126 00:11:35.182000 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f875209_3895_465c_9992_37ef91c4dda9.slice/crio-351e6513ebdf2957733ec63d773f98c8c72efec8065b2d225775c685d866c0e8 WatchSource:0}: Error finding container 351e6513ebdf2957733ec63d773f98c8c72efec8065b2d225775c685d866c0e8: Status 404 returned error can't find the container with id 351e6513ebdf2957733ec63d773f98c8c72efec8065b2d225775c685d866c0e8 Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.182346 5110 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bqdwj"] Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.185862 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4j4dd"] Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.190156 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-vl2cl"] Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.192460 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n929c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:11:35 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 26 00:11:35 crc kubenswrapper[5110]: [+]process-running ok Jan 26 00:11:35 crc kubenswrapper[5110]: healthz check failed Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.192532 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n929c" podUID="49efa7f2-d990-4552-80cf-5d4a72a32ec7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:11:35 crc kubenswrapper[5110]: W0126 00:11:35.192952 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b87002_b798_480a_8e17_83053d698239.slice/crio-ddaa0d7ea8a588281ae942b2558279f27090adfbb24392d2a1e9635cc10e79a5 WatchSource:0}: Error finding container ddaa0d7ea8a588281ae942b2558279f27090adfbb24392d2a1e9635cc10e79a5: Status 404 returned error can't find the container with id ddaa0d7ea8a588281ae942b2558279f27090adfbb24392d2a1e9635cc10e79a5 Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.193049 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-c5l6l"] Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.194787 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.194821 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-l6gvr"] Jan 26 00:11:35 crc kubenswrapper[5110]: E0126 00:11:35.195124 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:35.695110396 +0000 UTC m=+152.924009005 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.198705 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7"] Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.198763 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-crnjv"] Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.223159 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-jptld"] Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.227878 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8ndzr" event={"ID":"040d3d5f-c02a-4a70-92af-70700fd9e3c3","Type":"ContainerStarted","Data":"d9fdba4cd41d246d86e495f9bd0601afd04f222aa7171e250413b5a18bc5272d"} Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.230260 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq" event={"ID":"e17b81d4-9c4f-44ee-acaa-e008890ca2fd","Type":"ContainerStarted","Data":"b6ad1290ce037a91740809d9a669d170a6eafd538ee98a7d3a5509c583782038"} Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.241129 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29489760-jjbnv" 
event={"ID":"497eadd3-187e-4dfb-82c3-fad0d59eb723","Type":"ContainerStarted","Data":"f2fab06e6d0b2fc11f06b3b87a963bf855598c797480bd56210cec0407b96dda"}
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.244198 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hvgtl"]
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.245493 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-8chxh"]
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.271664 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-c49nj" event={"ID":"1e6d8af4-164c-4fc2-9032-596a1de18a11","Type":"ContainerStarted","Data":"2b0d0558e03010f3c98e80c4484bcfd7a16227d234ea4f763cdadb3207b34ffa"}
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.281942 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-g57zz" event={"ID":"d7cb0851-2110-44f5-b3fc-77478d1fd49f","Type":"ContainerStarted","Data":"bc9dda55e33bc3e8c57d0d97853dd1b0c04b494d2f31cb3a540e8462b6b491af"}
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.298728 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf" event={"ID":"16e64d17-7476-4caf-af16-73d93e2c1085","Type":"ContainerStarted","Data":"4cc1cea61669bb9c1fa8f9cc517fbaefbbc9ca21389a8fe3e37ead2547060021"}
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.298806 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf" event={"ID":"16e64d17-7476-4caf-af16-73d93e2c1085","Type":"ContainerStarted","Data":"95660ee2ea98453bbebae2c472d742ca6cfb62abfbe5312000fbad20a151bfee"}
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.299877 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:35 crc kubenswrapper[5110]: E0126 00:11:35.300263 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:35.800234356 +0000 UTC m=+153.029132965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.304277 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bqdwj" event={"ID":"24efc76d-eb4f-4036-804e-a71705eb1a78","Type":"ContainerStarted","Data":"3c80cf78077209f8f3354ff19b910e169beeec4920496a29820b4d31bcc00b3d"}
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.308449 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pzlvv"]
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.313624 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" event={"ID":"e2948eb5-9117-467a-89c9-742dde24b958","Type":"ContainerStarted","Data":"63f944f750921c4c775a9e258a403b0feca13b2157d6fcefc5ce5d566614db94"}
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.313845 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" event={"ID":"e2948eb5-9117-467a-89c9-742dde24b958","Type":"ContainerStarted","Data":"12893f4213c5e6b3cd62e5343e75bee6615e0a433cae719ada046543045c1b48"}
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.329937 5110 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-7fnwf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" start-of-body=
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.330011 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf" podUID="16e64d17-7476-4caf-af16-73d93e2c1085" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused"
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.330054 5110 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-rwwx4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.330154 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" podUID="e2948eb5-9117-467a-89c9-742dde24b958" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.464005 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:35 crc kubenswrapper[5110]: E0126 00:11:35.464464 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:35.964450424 +0000 UTC m=+153.193349033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.509059 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2dt85" event={"ID":"d1eb870e-2fb9-4e38-94b6-80fb61ed6ad3","Type":"ContainerStarted","Data":"b5e5e92ace993013fcd84861c8a09ab647a0612a743096f20824c818ed73a59c"}
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.509344 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4"
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.509375 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf"
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.509412 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-vd2k2" event={"ID":"defa6a4f-7d53-4d4c-b0c7-cfb7489db02c","Type":"ContainerStarted","Data":"8c21a3978a4ab96a80b573d145515e89e5dc244a84e0ab99b021ed38e771da48"}
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.509440 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-vd2k2" event={"ID":"defa6a4f-7d53-4d4c-b0c7-cfb7489db02c","Type":"ContainerStarted","Data":"a245e20966aae520bf81600acdc7f29e989573d9f8ac297e37d591d0078c3fc2"}
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.521692 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-77449" event={"ID":"2438617c-3b33-4b33-971c-afaab481cfe6","Type":"ContainerStarted","Data":"30a83e41dc7bdfff142040fa8a430e86a148f137d567e3315e736f8845262538"}
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.530198 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-77449"
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.538772 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.538882 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.554165 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" event={"ID":"0cc5795e-df15-48c8-949a-7254633f27e3","Type":"ContainerStarted","Data":"c01a7ea41960ae0a60610b291685facfbe0a85555d5a62cc297a66befaf39870"}
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.566503 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:35 crc kubenswrapper[5110]: E0126 00:11:35.568303 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:36.068275395 +0000 UTC m=+153.297174004 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.582009 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-979dq" event={"ID":"c451c2d9-b218-4494-93e4-200ef1a5eb75","Type":"ContainerStarted","Data":"8b3d57242438914e4d0c1553260a670ca41c1f5c5c7364533bb85d4c8e6911ee"}
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.584581 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-979dq"
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.587206 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" event={"ID":"4ba4d5a4-755b-4d6e-b250-cd705b244775","Type":"ContainerStarted","Data":"f5fdb38cd83bd1918db414123327b2499349b9dcb1e86f6c1f6eb742753e87ca"}
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.600139 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-67svm" event={"ID":"3bcb987d-ffcc-436f-b7fb-75a4d2e4952d","Type":"ContainerStarted","Data":"1cf689f2f03b6372a05b780200e88e2988461ad8d613b213c4ade9521fab62f8"}
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.668583 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:35 crc kubenswrapper[5110]: E0126 00:11:35.670358 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:36.170339566 +0000 UTC m=+153.399238175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.703088 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-67svm"
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.709546 5110 patch_prober.go:28] interesting pod/console-operator-67c89758df-67svm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.709663 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-67svm" podUID="3bcb987d-ffcc-436f-b7fb-75a4d2e4952d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.721755 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" podStartSLOduration=132.721724471 podStartE2EDuration="2m12.721724471s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:35.538893756 +0000 UTC m=+152.767792385" watchObservedRunningTime="2026-01-26 00:11:35.721724471 +0000 UTC m=+152.950623080"
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.722062 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"00e8c76c024c7586a7a1bd577b1d311e88e4a829207c16604bbbc1bc64bc0bcc"}
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.764785 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn" event={"ID":"ce4b4a91-4053-4fb9-be7e-ac51e71a829e","Type":"ContainerStarted","Data":"d7b06511a73ede390e5388a2b217b53ba115dba5eb5881f92875dd6dfe94e0d4"}
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.766663 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf" podStartSLOduration=132.76663792 podStartE2EDuration="2m12.76663792s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:35.765473056 +0000 UTC m=+152.994371665" watchObservedRunningTime="2026-01-26 00:11:35.76663792 +0000 UTC m=+152.995536529"
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.767045 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-h4rkf" event={"ID":"3a884209-147f-4786-9a92-5eb1a685dc1b","Type":"ContainerStarted","Data":"4eb1fe6d0a017537aec60131e84ca7ea6ed6843d4ee864cf00ead5a49e6a81ea"}
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.767077 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-h4rkf" event={"ID":"3a884209-147f-4786-9a92-5eb1a685dc1b","Type":"ContainerStarted","Data":"3a041b1c6b318faa8385f14bf9d4d2f3dcb111798abc5f91151a7237c74de80f"}
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.783093 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:35 crc kubenswrapper[5110]: E0126 00:11:35.784720 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:36.284684912 +0000 UTC m=+153.513583531 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.792424 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.792501 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.890601 5110 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-7fnwf container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" start-of-body=
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.890701 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf" podUID="16e64d17-7476-4caf-af16-73d93e2c1085" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused"
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.891957 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:35 crc kubenswrapper[5110]: E0126 00:11:35.892878 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:36.392855049 +0000 UTC m=+153.621753658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.925095 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-c49nj" podStartSLOduration=132.925067831 podStartE2EDuration="2m12.925067831s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:35.892666284 +0000 UTC m=+153.121564913" watchObservedRunningTime="2026-01-26 00:11:35.925067831 +0000 UTC m=+153.153966440"
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.927917 5110 patch_prober.go:28] interesting pod/console-operator-67c89758df-67svm container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.927995 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-67c89758df-67svm" podUID="3bcb987d-ffcc-436f-b7fb-75a4d2e4952d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Jan 26 00:11:35 crc kubenswrapper[5110]: I0126 00:11:35.928219 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.003740 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.004445 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-vd2k2" podStartSLOduration=133.004421755 podStartE2EDuration="2m13.004421755s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:35.95685884 +0000 UTC m=+153.185757469" watchObservedRunningTime="2026-01-26 00:11:36.004421755 +0000 UTC m=+153.233320364"
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.005196 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 26 00:11:36 crc kubenswrapper[5110]: E0126 00:11:36.006039 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:36.506018531 +0000 UTC m=+153.734917130 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.006102 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.037082 5110 ???:1] "http: TLS handshake error from 192.168.126.11:45142: no serving certificate available for the kubelet"
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.038102 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.038350 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.051122 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-77449" podStartSLOduration=133.051098164 podStartE2EDuration="2m13.051098164s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:36.050007993 +0000 UTC m=+153.278906602" watchObservedRunningTime="2026-01-26 00:11:36.051098164 +0000 UTC m=+153.279996773"
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.078905 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-979dq" podStartSLOduration=133.078886678 podStartE2EDuration="2m13.078886678s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:36.077525149 +0000 UTC m=+153.306423758" watchObservedRunningTime="2026-01-26 00:11:36.078886678 +0000 UTC m=+153.307785287"
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.099381 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-tgtdw" podStartSLOduration=133.09936404 podStartE2EDuration="2m13.09936404s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:36.098744522 +0000 UTC m=+153.327643131" watchObservedRunningTime="2026-01-26 00:11:36.09936404 +0000 UTC m=+153.328262649"
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.105882 5110 ???:1] "http: TLS handshake error from 192.168.126.11:45148: no serving certificate available for the kubelet"
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.214667 5110 ???:1] "http: TLS handshake error from 192.168.126.11:45158: no serving certificate available for the kubelet"
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.215856 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.215915 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/772c8d38-81bf-4f54-b995-e740d2056ead-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"772c8d38-81bf-4f54-b995-e740d2056ead\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.215935 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/772c8d38-81bf-4f54-b995-e740d2056ead-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"772c8d38-81bf-4f54-b995-e740d2056ead\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 26 00:11:36 crc kubenswrapper[5110]: E0126 00:11:36.216209 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:36.716196908 +0000 UTC m=+153.945095517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.217829 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-n929c"
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.238059 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" podStartSLOduration=133.238029139 podStartE2EDuration="2m13.238029139s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:36.124440115 +0000 UTC m=+153.353338724" watchObservedRunningTime="2026-01-26 00:11:36.238029139 +0000 UTC m=+153.466927748"
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.239589 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n929c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 00:11:36 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld
Jan 26 00:11:36 crc kubenswrapper[5110]: [+]process-running ok
Jan 26 00:11:36 crc kubenswrapper[5110]: healthz check failed
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.239844 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n929c" podUID="49efa7f2-d990-4552-80cf-5d4a72a32ec7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.281433 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-67svm" podStartSLOduration=133.281407253 podStartE2EDuration="2m13.281407253s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:36.239290526 +0000 UTC m=+153.468189135" watchObservedRunningTime="2026-01-26 00:11:36.281407253 +0000 UTC m=+153.510305862"
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.318334 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.319224 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/772c8d38-81bf-4f54-b995-e740d2056ead-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"772c8d38-81bf-4f54-b995-e740d2056ead\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.319284 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/772c8d38-81bf-4f54-b995-e740d2056ead-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"772c8d38-81bf-4f54-b995-e740d2056ead\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 26 00:11:36 crc kubenswrapper[5110]: E0126 00:11:36.320270 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:36.820237816 +0000 UTC m=+154.049136435 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.320345 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/772c8d38-81bf-4f54-b995-e740d2056ead-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"772c8d38-81bf-4f54-b995-e740d2056ead\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.420714 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:36 crc kubenswrapper[5110]: E0126 00:11:36.421366 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:36.921351009 +0000 UTC m=+154.150249618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.487962 5110 ???:1] "http: TLS handshake error from 192.168.126.11:45160: no serving certificate available for the kubelet"
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.497112 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/772c8d38-81bf-4f54-b995-e740d2056ead-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"772c8d38-81bf-4f54-b995-e740d2056ead\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.524114 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:36 crc kubenswrapper[5110]: E0126 00:11:36.524522 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:37.024488512 +0000 UTC m=+154.253387111 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.539172 5110 ???:1] "http: TLS handshake error from 192.168.126.11:45166: no serving certificate available for the kubelet"
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.567616 5110 ???:1] "http: TLS handshake error from 192.168.126.11:45182: no serving certificate available for the kubelet"
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.626474 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:36 crc kubenswrapper[5110]: E0126 00:11:36.626978 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:37.126959324 +0000 UTC m=+154.355857933 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.727681 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:36 crc kubenswrapper[5110]: E0126 00:11:36.728440 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:37.228408777 +0000 UTC m=+154.457307386 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.800248 5110 ???:1] "http: TLS handshake error from 192.168.126.11:45196: no serving certificate available for the kubelet" Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.818624 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-77449" event={"ID":"2438617c-3b33-4b33-971c-afaab481cfe6","Type":"ContainerStarted","Data":"220cf5974f7df703eac4064ef3d7f6ca6b6beacc0853208814acf86c2f43157b"} Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.820127 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.820181 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.828844 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.830402 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:36 crc kubenswrapper[5110]: E0126 00:11:36.831010 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:37.330990183 +0000 UTC m=+154.559888792 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.837471 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-l6gvr" event={"ID":"0be01a81-9d33-487a-abf9-0284a7b3f24b","Type":"ContainerStarted","Data":"56d99cbd2f16d3d39f061640590e357b49ca61f247e55e78a0d5f09b6761b94e"} Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.846100 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7" 
event={"ID":"925904bd-cb57-4356-a6df-e72bd716398b","Type":"ContainerStarted","Data":"df267ec3affd6a3000f30961916d12fb60f7c339c09364841a8ea2d3da907a4e"} Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.851442 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-c5l6l" event={"ID":"6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3","Type":"ContainerStarted","Data":"c5cc1ae5fc631abc319326d386cf0dc6d6110a295e9ca4b812755e3d284d7ef2"} Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.870610 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp" event={"ID":"ddb3d5eb-9401-4032-9bcd-c2f798fbaf51","Type":"ContainerStarted","Data":"438d9a80bdbaa509d1123c5f920dbea7c42a1edcff32426c957d85b83d244b86"} Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.882886 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-8chxh" event={"ID":"1d983c82-449b-4550-986f-5d378560b332","Type":"ContainerStarted","Data":"8aeff14c184e8004cd47fc131caf0ad0637c9c9ffdd2416d3374c64a56aa864c"} Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.884099 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq" event={"ID":"e17b81d4-9c4f-44ee-acaa-e008890ca2fd","Type":"ContainerStarted","Data":"712ba8c886e08c43f3f480b9be3bb9ea720caa035e195d082f7ba8f7021616aa"} Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.886109 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq" Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.887876 5110 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-pcrzq container/catalog-operator namespace/openshift-operator-lifecycle-manager: 
Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.887929 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq" podUID="e17b81d4-9c4f-44ee-acaa-e008890ca2fd" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.889979 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29489760-jjbnv" event={"ID":"497eadd3-187e-4dfb-82c3-fad0d59eb723","Type":"ContainerStarted","Data":"4900e5e2cbcf3819d08be0339eb469a67850f5e241a73bd277e5ef98a14c61ab"} Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.891328 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4j4dd" event={"ID":"9cf38d4d-d415-4e81-85c3-bb2c65f7423b","Type":"ContainerStarted","Data":"81b07cf7fb6e0e828f0e33e7f7c54f62a37777a863b6f9886b632df2c530ff3a"} Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.892477 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-crnjv" event={"ID":"77aa983a-f3c1-4799-a84b-c3c7a381a1bc","Type":"ContainerStarted","Data":"e17d60253095b56917090e43a057934b9ead96abd2f937d1d2c22b3c7f8ec1ca"} Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.893517 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-55st2" event={"ID":"5f875209-3895-465c-9992-37ef91c4dda9","Type":"ContainerStarted","Data":"351e6513ebdf2957733ec63d773f98c8c72efec8065b2d225775c685d866c0e8"} Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 
00:11:36.894172 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"ddaa0d7ea8a588281ae942b2558279f27090adfbb24392d2a1e9635cc10e79a5"} Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.894870 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vl2cl" event={"ID":"3451ac5b-8ad7-4419-b262-ec54012b9dc6","Type":"ContainerStarted","Data":"f047dfb196988bc2577794c02d4dc847a2a18205246e2d678ff9d0dd9a898403"} Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.895772 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn" event={"ID":"ce4b4a91-4053-4fb9-be7e-ac51e71a829e","Type":"ContainerStarted","Data":"118f72575bfe902aac7b98120a66aa2cd3de2e6cbee4b0be5f30c5e1e286c2e1"} Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.896463 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn" Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.897263 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"7efb7ac69ccc7eb48724e8c4c1a8e94fa78baf4d43ed42be08d0539b625f26bd"} Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.897953 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-jptld" event={"ID":"10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730","Type":"ContainerStarted","Data":"f343fc2ab4f6879060eef3f55d2d74d4a5d18287c39f0b1dada9ae276c2d4318"} Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.900457 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-pzlvv" event={"ID":"4464550b-e8e5-4f07-8e54-40eb792eb201","Type":"ContainerStarted","Data":"08608623ebc99e9bd9ac0fe589c7ab992640c3ce325165999a3ccd99764b3013"} Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.904140 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-g57zz" event={"ID":"d7cb0851-2110-44f5-b3fc-77478d1fd49f","Type":"ContainerStarted","Data":"4f01140c195e2ca82e8fae3c20256f38d9a11a9ec6fa4b34c36311029e0c9ab2"} Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.942973 5110 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-99xsn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.44:8443/healthz\": dial tcp 10.217.0.44:8443: connect: connection refused" start-of-body= Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.943056 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn" podUID="ce4b4a91-4053-4fb9-be7e-ac51e71a829e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.44:8443/healthz\": dial tcp 10.217.0.44:8443: connect: connection refused" Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.943673 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:36 crc kubenswrapper[5110]: E0126 00:11:36.946210 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-26 00:11:37.446175094 +0000 UTC m=+154.675073703 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.978699 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2dt85" event={"ID":"d1eb870e-2fb9-4e38-94b6-80fb61ed6ad3","Type":"ContainerStarted","Data":"0a4485cf67e3f5ed6c1c562348fd26f85fce4c51e535bc2719419e5c89247de5"} Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.979308 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq" podStartSLOduration=133.979173618 podStartE2EDuration="2m13.979173618s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:36.973218156 +0000 UTC m=+154.202116765" watchObservedRunningTime="2026-01-26 00:11:36.979173618 +0000 UTC m=+154.208072227" Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.996849 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" podUID="8c8a6437-b0cb-4825-999f-6e523fd394e9" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a" gracePeriod=30 Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.997834 5110 patch_prober.go:28] interesting 
pod/console-operator-67c89758df-67svm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 26 00:11:36 crc kubenswrapper[5110]: I0126 00:11:36.997868 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-67svm" podUID="3bcb987d-ffcc-436f-b7fb-75a4d2e4952d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 26 00:11:37 crc kubenswrapper[5110]: I0126 00:11:37.056593 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:37 crc kubenswrapper[5110]: E0126 00:11:37.058476 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:37.55846142 +0000 UTC m=+154.787360029 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:37 crc kubenswrapper[5110]: I0126 00:11:37.102426 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn" podStartSLOduration=134.102405961 podStartE2EDuration="2m14.102405961s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:37.10169476 +0000 UTC m=+154.330593379" watchObservedRunningTime="2026-01-26 00:11:37.102405961 +0000 UTC m=+154.331304570" Jan 26 00:11:37 crc kubenswrapper[5110]: I0126 00:11:37.165128 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:37 crc kubenswrapper[5110]: E0126 00:11:37.165356 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:37.66531919 +0000 UTC m=+154.894217799 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:37 crc kubenswrapper[5110]: I0126 00:11:37.166090 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:37 crc kubenswrapper[5110]: E0126 00:11:37.168915 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:37.668902764 +0000 UTC m=+154.897801373 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:37 crc kubenswrapper[5110]: I0126 00:11:37.192201 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n929c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:11:37 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 26 00:11:37 crc kubenswrapper[5110]: [+]process-running ok Jan 26 00:11:37 crc kubenswrapper[5110]: healthz check failed Jan 26 00:11:37 crc kubenswrapper[5110]: I0126 00:11:37.192289 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n929c" podUID="49efa7f2-d990-4552-80cf-5d4a72a32ec7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:11:37 crc kubenswrapper[5110]: I0126 00:11:37.205391 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2dt85" podStartSLOduration=134.205360408 podStartE2EDuration="2m14.205360408s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:37.203401471 +0000 UTC m=+154.432300110" watchObservedRunningTime="2026-01-26 00:11:37.205360408 +0000 UTC m=+154.434259027" Jan 26 00:11:37 crc kubenswrapper[5110]: I0126 00:11:37.205942 5110 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-image-registry/image-pruner-29489760-jjbnv" podStartSLOduration=134.205934974 podStartE2EDuration="2m14.205934974s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:37.142679995 +0000 UTC m=+154.371578604" watchObservedRunningTime="2026-01-26 00:11:37.205934974 +0000 UTC m=+154.434833583" Jan 26 00:11:37 crc kubenswrapper[5110]: I0126 00:11:37.288169 5110 ???:1] "http: TLS handshake error from 192.168.126.11:45204: no serving certificate available for the kubelet" Jan 26 00:11:37 crc kubenswrapper[5110]: I0126 00:11:37.321905 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:37 crc kubenswrapper[5110]: E0126 00:11:37.322420 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:37.822390571 +0000 UTC m=+155.051289180 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:37 crc kubenswrapper[5110]: I0126 00:11:37.423607 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:37 crc kubenswrapper[5110]: E0126 00:11:37.423961 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:37.923948117 +0000 UTC m=+155.152846716 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:37 crc kubenswrapper[5110]: I0126 00:11:37.550898 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:37 crc kubenswrapper[5110]: E0126 00:11:37.551537 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:38.051508256 +0000 UTC m=+155.280406865 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:37 crc kubenswrapper[5110]: I0126 00:11:37.659839 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:37 crc kubenswrapper[5110]: E0126 00:11:37.660773 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:38.160757454 +0000 UTC m=+155.389656063 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:37 crc kubenswrapper[5110]: I0126 00:11:37.761940 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:37 crc kubenswrapper[5110]: E0126 00:11:37.762441 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:38.262409463 +0000 UTC m=+155.491308082 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:37 crc kubenswrapper[5110]: I0126 00:11:37.863733 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:37 crc kubenswrapper[5110]: E0126 00:11:37.864093 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:38.364080263 +0000 UTC m=+155.592978872 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:37 crc kubenswrapper[5110]: I0126 00:11:37.983662 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"
Jan 26 00:11:37 crc kubenswrapper[5110]: I0126 00:11:37.988719 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:37 crc kubenswrapper[5110]: E0126 00:11:37.988983 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:38.488958774 +0000 UTC m=+155.717857383 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:37 crc kubenswrapper[5110]: I0126 00:11:37.989334 5110 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-7fnwf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": context deadline exceeded" start-of-body=
Jan 26 00:11:37 crc kubenswrapper[5110]: I0126 00:11:37.989387 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf" podUID="16e64d17-7476-4caf-af16-73d93e2c1085" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": context deadline exceeded"
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.004971 5110 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-rwwx4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.005040 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" podUID="e2948eb5-9117-467a-89c9-742dde24b958" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.096837 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:38 crc kubenswrapper[5110]: E0126 00:11:38.097551 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:38.597528923 +0000 UTC m=+155.826427532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.124586 5110 ???:1] "http: TLS handshake error from 192.168.126.11:45212: no serving certificate available for the kubelet"
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.182819 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2"
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.182902 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2"
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.214063 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:38 crc kubenswrapper[5110]: E0126 00:11:38.225721 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:38.725680808 +0000 UTC m=+155.954579427 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.317571 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"ab03106a704a057296702100a29b9764a2f1d57ec050c4b40e936104d967a929"}
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.328513 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n929c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 00:11:38 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld
Jan 26 00:11:38 crc kubenswrapper[5110]: [+]process-running ok
Jan 26 00:11:38 crc kubenswrapper[5110]: healthz check failed
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.328588 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n929c" podUID="49efa7f2-d990-4552-80cf-5d4a72a32ec7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.328969 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:38 crc kubenswrapper[5110]: E0126 00:11:38.330946 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:38.830930701 +0000 UTC m=+156.059829310 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.333636 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl"
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.349976 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8ndzr" event={"ID":"040d3d5f-c02a-4a70-92af-70700fd9e3c3","Type":"ContainerStarted","Data":"93f5aa88961c746a9bcf3a35c35ab652a5913374e1b8bb6db1b97572acc2af59"}
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.367833 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"3a6b6d4e2f4fc795a90471b5e4f9b4847450153a2575f063d17dca06d08d2ced"}
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.368343 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-hsvjl"
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.368447 5110 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-99xsn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.44:8443/healthz\": dial tcp 10.217.0.44:8443: connect: connection refused" start-of-body=
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.368531 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn" podUID="ce4b4a91-4053-4fb9-be7e-ac51e71a829e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.44:8443/healthz\": dial tcp 10.217.0.44:8443: connect: connection refused"
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.368555 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.369109 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.369175 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.418563 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-pcrzq"
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.440934 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:38 crc kubenswrapper[5110]: E0126 00:11:38.442888 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:38.942854337 +0000 UTC m=+156.171752946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.466499 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-979dq"
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.552315 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:38 crc kubenswrapper[5110]: E0126 00:11:38.552678 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:39.052664412 +0000 UTC m=+156.281563011 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.653385 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:38 crc kubenswrapper[5110]: E0126 00:11:38.656636 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:39.156605807 +0000 UTC m=+156.385504416 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.701935 5110 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-qdwb2 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 26 00:11:38 crc kubenswrapper[5110]: [+]log ok
Jan 26 00:11:38 crc kubenswrapper[5110]: [+]etcd ok
Jan 26 00:11:38 crc kubenswrapper[5110]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 26 00:11:38 crc kubenswrapper[5110]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 26 00:11:38 crc kubenswrapper[5110]: [+]poststarthook/max-in-flight-filter ok
Jan 26 00:11:38 crc kubenswrapper[5110]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 26 00:11:38 crc kubenswrapper[5110]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 26 00:11:38 crc kubenswrapper[5110]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 26 00:11:38 crc kubenswrapper[5110]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 26 00:11:38 crc kubenswrapper[5110]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 26 00:11:38 crc kubenswrapper[5110]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 26 00:11:38 crc kubenswrapper[5110]: [+]poststarthook/openshift.io-startinformers ok
Jan 26 00:11:38 crc kubenswrapper[5110]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 26 00:11:38 crc kubenswrapper[5110]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 26 00:11:38 crc kubenswrapper[5110]: livez check failed
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.702050 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2" podUID="0cc5795e-df15-48c8-949a-7254633f27e3" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.766021 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:38 crc kubenswrapper[5110]: E0126 00:11:38.766515 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:39.266489424 +0000 UTC m=+156.495388033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.866608 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:38 crc kubenswrapper[5110]: E0126 00:11:38.866808 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:39.366765924 +0000 UTC m=+156.595664533 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.867350 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:38 crc kubenswrapper[5110]: E0126 00:11:38.867842 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:39.367823954 +0000 UTC m=+156.596722563 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.957987 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4"
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.968244 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:38 crc kubenswrapper[5110]: E0126 00:11:38.968539 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:39.468521386 +0000 UTC m=+156.697419995 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:38 crc kubenswrapper[5110]: I0126 00:11:38.968563 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c2w74"]
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.085630 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:39 crc kubenswrapper[5110]: E0126 00:11:39.086354 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:39.586333752 +0000 UTC m=+156.815232361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.145903 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c2w74"
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.145735 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c2w74"]
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.153715 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2dlcz"]
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.167034 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.179749 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.179868 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2dlcz"]
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.180097 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2dlcz"
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.186775 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:39 crc kubenswrapper[5110]: E0126 00:11:39.187165 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:39.687100164 +0000 UTC m=+156.915998773 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.187322 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.193415 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.197700 5110 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-7fnwf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 26 00:11:39 crc kubenswrapper[5110]: [+]log ok
Jan 26 00:11:39 crc kubenswrapper[5110]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 26 00:11:39 crc kubenswrapper[5110]: [-]poststarthook/max-in-flight-filter failed: reason withheld
Jan 26 00:11:39 crc kubenswrapper[5110]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 26 00:11:39 crc kubenswrapper[5110]: healthz check failed
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.197809 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf" podUID="16e64d17-7476-4caf-af16-73d93e2c1085" containerName="packageserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.200588 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ltvhd"]
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.201407 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n929c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 00:11:39 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld
Jan 26 00:11:39 crc kubenswrapper[5110]: [+]process-running ok
Jan 26 00:11:39 crc kubenswrapper[5110]: healthz check failed
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.201524 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n929c" podUID="49efa7f2-d990-4552-80cf-5d4a72a32ec7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 00:11:39 crc kubenswrapper[5110]: E0126 00:11:39.205206 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:39.705181967 +0000 UTC m=+156.934080576 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.260053 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ltvhd"]
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.260267 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ltvhd"
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.275725 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"]
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.311683 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4"]
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.335654 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.335782 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24c8aa2b-e367-4f40-92d2-ff31d5cbce2d-catalog-content\") pod \"community-operators-ltvhd\" (UID: \"24c8aa2b-e367-4f40-92d2-ff31d5cbce2d\") " pod="openshift-marketplace/community-operators-ltvhd"
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.335855 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24c8aa2b-e367-4f40-92d2-ff31d5cbce2d-utilities\") pod \"community-operators-ltvhd\" (UID: \"24c8aa2b-e367-4f40-92d2-ff31d5cbce2d\") " pod="openshift-marketplace/community-operators-ltvhd"
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.335898 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4276f77a-a21c-47f2-9902-e37c3ab865f5-catalog-content\") pod \"certified-operators-2dlcz\" (UID: \"4276f77a-a21c-47f2-9902-e37c3ab865f5\") " pod="openshift-marketplace/certified-operators-2dlcz"
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.335934 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20cad200-6605-4fac-b28d-b84cd2d74a89-utilities\") pod \"community-operators-c2w74\" (UID: \"20cad200-6605-4fac-b28d-b84cd2d74a89\") " pod="openshift-marketplace/community-operators-c2w74"
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.335968 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4276f77a-a21c-47f2-9902-e37c3ab865f5-utilities\") pod \"certified-operators-2dlcz\" (UID: \"4276f77a-a21c-47f2-9902-e37c3ab865f5\") " pod="openshift-marketplace/certified-operators-2dlcz"
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.336009 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndpgg\" (UniqueName: \"kubernetes.io/projected/4276f77a-a21c-47f2-9902-e37c3ab865f5-kube-api-access-ndpgg\") pod \"certified-operators-2dlcz\" (UID: \"4276f77a-a21c-47f2-9902-e37c3ab865f5\") " pod="openshift-marketplace/certified-operators-2dlcz"
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.336041 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzhvw\" (UniqueName: \"kubernetes.io/projected/24c8aa2b-e367-4f40-92d2-ff31d5cbce2d-kube-api-access-tzhvw\") pod \"community-operators-ltvhd\" (UID: \"24c8aa2b-e367-4f40-92d2-ff31d5cbce2d\") " pod="openshift-marketplace/community-operators-ltvhd"
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.336082 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfpdt\" (UniqueName: \"kubernetes.io/projected/20cad200-6605-4fac-b28d-b84cd2d74a89-kube-api-access-kfpdt\") pod \"community-operators-c2w74\" (UID: \"20cad200-6605-4fac-b28d-b84cd2d74a89\") " pod="openshift-marketplace/community-operators-c2w74"
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.336143 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20cad200-6605-4fac-b28d-b84cd2d74a89-catalog-content\") pod \"community-operators-c2w74\" (UID: \"20cad200-6605-4fac-b28d-b84cd2d74a89\") " pod="openshift-marketplace/community-operators-c2w74"
Jan 26 00:11:39 crc kubenswrapper[5110]: E0126 00:11:39.336307 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:39.836275797 +0000 UTC m=+157.065174406 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.364931 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-h4rkf" event={"ID":"3a884209-147f-4786-9a92-5eb1a685dc1b","Type":"ContainerStarted","Data":"3aad720d3f68186afccba4a554d90df1d1df30d3405e0e3f3f7fec9a32c23873"}
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.365307 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-h4rkf"
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.377585 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"772c8d38-81bf-4f54-b995-e740d2056ead","Type":"ContainerStarted","Data":"edadc16f949d30cd86fdfb92298a7037e3a4562719832fdb69df8f391344d5aa"}
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.385113 5110 generic.go:358] "Generic (PLEG): container finished" podID="defa6a4f-7d53-4d4c-b0c7-cfb7489db02c" containerID="8c21a3978a4ab96a80b573d145515e89e5dc244a84e0ab99b021ed38e771da48" exitCode=0
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.386218 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-vd2k2" event={"ID":"defa6a4f-7d53-4d4c-b0c7-cfb7489db02c","Type":"ContainerDied","Data":"8c21a3978a4ab96a80b573d145515e89e5dc244a84e0ab99b021ed38e771da48"}
Jan 26 00:11:39 crc kubenswrapper[5110]: I0126
00:11:39.392284 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr" podUID="9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb" containerName="controller-manager" containerID="cri-o://a1d2ffe9aa28d184a3128e2530810a13ca2ed60200a125c00979d325cc96f2c3" gracePeriod=30 Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.400006 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-h4rkf" podStartSLOduration=136.399980589 podStartE2EDuration="2m16.399980589s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:39.395001575 +0000 UTC m=+156.623900184" watchObservedRunningTime="2026-01-26 00:11:39.399980589 +0000 UTC m=+156.628879198" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.410683 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-99xsn" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.437747 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24c8aa2b-e367-4f40-92d2-ff31d5cbce2d-utilities\") pod \"community-operators-ltvhd\" (UID: \"24c8aa2b-e367-4f40-92d2-ff31d5cbce2d\") " pod="openshift-marketplace/community-operators-ltvhd" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.437842 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4276f77a-a21c-47f2-9902-e37c3ab865f5-catalog-content\") pod \"certified-operators-2dlcz\" (UID: \"4276f77a-a21c-47f2-9902-e37c3ab865f5\") " pod="openshift-marketplace/certified-operators-2dlcz" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 
00:11:39.438011 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20cad200-6605-4fac-b28d-b84cd2d74a89-utilities\") pod \"community-operators-c2w74\" (UID: \"20cad200-6605-4fac-b28d-b84cd2d74a89\") " pod="openshift-marketplace/community-operators-c2w74" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.438142 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4276f77a-a21c-47f2-9902-e37c3ab865f5-utilities\") pod \"certified-operators-2dlcz\" (UID: \"4276f77a-a21c-47f2-9902-e37c3ab865f5\") " pod="openshift-marketplace/certified-operators-2dlcz" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.438269 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ndpgg\" (UniqueName: \"kubernetes.io/projected/4276f77a-a21c-47f2-9902-e37c3ab865f5-kube-api-access-ndpgg\") pod \"certified-operators-2dlcz\" (UID: \"4276f77a-a21c-47f2-9902-e37c3ab865f5\") " pod="openshift-marketplace/certified-operators-2dlcz" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.438309 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tzhvw\" (UniqueName: \"kubernetes.io/projected/24c8aa2b-e367-4f40-92d2-ff31d5cbce2d-kube-api-access-tzhvw\") pod \"community-operators-ltvhd\" (UID: \"24c8aa2b-e367-4f40-92d2-ff31d5cbce2d\") " pod="openshift-marketplace/community-operators-ltvhd" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.438340 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kfpdt\" (UniqueName: \"kubernetes.io/projected/20cad200-6605-4fac-b28d-b84cd2d74a89-kube-api-access-kfpdt\") pod \"community-operators-c2w74\" (UID: \"20cad200-6605-4fac-b28d-b84cd2d74a89\") " pod="openshift-marketplace/community-operators-c2w74" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 
00:11:39.438473 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.438596 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20cad200-6605-4fac-b28d-b84cd2d74a89-catalog-content\") pod \"community-operators-c2w74\" (UID: \"20cad200-6605-4fac-b28d-b84cd2d74a89\") " pod="openshift-marketplace/community-operators-c2w74" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.438662 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24c8aa2b-e367-4f40-92d2-ff31d5cbce2d-catalog-content\") pod \"community-operators-ltvhd\" (UID: \"24c8aa2b-e367-4f40-92d2-ff31d5cbce2d\") " pod="openshift-marketplace/community-operators-ltvhd" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.442935 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24c8aa2b-e367-4f40-92d2-ff31d5cbce2d-utilities\") pod \"community-operators-ltvhd\" (UID: \"24c8aa2b-e367-4f40-92d2-ff31d5cbce2d\") " pod="openshift-marketplace/community-operators-ltvhd" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.443723 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4276f77a-a21c-47f2-9902-e37c3ab865f5-catalog-content\") pod \"certified-operators-2dlcz\" (UID: \"4276f77a-a21c-47f2-9902-e37c3ab865f5\") " pod="openshift-marketplace/certified-operators-2dlcz" Jan 26 00:11:39 crc 
kubenswrapper[5110]: I0126 00:11:39.445723 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20cad200-6605-4fac-b28d-b84cd2d74a89-utilities\") pod \"community-operators-c2w74\" (UID: \"20cad200-6605-4fac-b28d-b84cd2d74a89\") " pod="openshift-marketplace/community-operators-c2w74" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.446810 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4276f77a-a21c-47f2-9902-e37c3ab865f5-utilities\") pod \"certified-operators-2dlcz\" (UID: \"4276f77a-a21c-47f2-9902-e37c3ab865f5\") " pod="openshift-marketplace/certified-operators-2dlcz" Jan 26 00:11:39 crc kubenswrapper[5110]: E0126 00:11:39.450984 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:39.950963383 +0000 UTC m=+157.179861992 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.451712 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20cad200-6605-4fac-b28d-b84cd2d74a89-catalog-content\") pod \"community-operators-c2w74\" (UID: \"20cad200-6605-4fac-b28d-b84cd2d74a89\") " pod="openshift-marketplace/community-operators-c2w74" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.452066 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24c8aa2b-e367-4f40-92d2-ff31d5cbce2d-catalog-content\") pod \"community-operators-ltvhd\" (UID: \"24c8aa2b-e367-4f40-92d2-ff31d5cbce2d\") " pod="openshift-marketplace/community-operators-ltvhd" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.461035 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ssztd"] Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.469902 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ssztd" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.478776 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ssztd"] Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.497937 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfpdt\" (UniqueName: \"kubernetes.io/projected/20cad200-6605-4fac-b28d-b84cd2d74a89-kube-api-access-kfpdt\") pod \"community-operators-c2w74\" (UID: \"20cad200-6605-4fac-b28d-b84cd2d74a89\") " pod="openshift-marketplace/community-operators-c2w74" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.513272 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndpgg\" (UniqueName: \"kubernetes.io/projected/4276f77a-a21c-47f2-9902-e37c3ab865f5-kube-api-access-ndpgg\") pod \"certified-operators-2dlcz\" (UID: \"4276f77a-a21c-47f2-9902-e37c3ab865f5\") " pod="openshift-marketplace/certified-operators-2dlcz" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.523023 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzhvw\" (UniqueName: \"kubernetes.io/projected/24c8aa2b-e367-4f40-92d2-ff31d5cbce2d-kube-api-access-tzhvw\") pod \"community-operators-ltvhd\" (UID: \"24c8aa2b-e367-4f40-92d2-ff31d5cbce2d\") " pod="openshift-marketplace/community-operators-ltvhd" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.527286 5110 ???:1] "http: TLS handshake error from 192.168.126.11:38054: no serving certificate available for the kubelet" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.541350 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:39 crc kubenswrapper[5110]: E0126 00:11:39.541593 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:40.041552293 +0000 UTC m=+157.270450902 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.541806 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.542110 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f9f1edc-9805-4975-8e01-a3428d24cc00-utilities\") pod \"certified-operators-ssztd\" (UID: \"4f9f1edc-9805-4975-8e01-a3428d24cc00\") " pod="openshift-marketplace/certified-operators-ssztd" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.542204 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjdlf\" (UniqueName: 
\"kubernetes.io/projected/4f9f1edc-9805-4975-8e01-a3428d24cc00-kube-api-access-tjdlf\") pod \"certified-operators-ssztd\" (UID: \"4f9f1edc-9805-4975-8e01-a3428d24cc00\") " pod="openshift-marketplace/certified-operators-ssztd" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.542251 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f9f1edc-9805-4975-8e01-a3428d24cc00-catalog-content\") pod \"certified-operators-ssztd\" (UID: \"4f9f1edc-9805-4975-8e01-a3428d24cc00\") " pod="openshift-marketplace/certified-operators-ssztd" Jan 26 00:11:39 crc kubenswrapper[5110]: E0126 00:11:39.542368 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:40.042347456 +0000 UTC m=+157.271246115 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.716678 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.717259 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tjdlf\" (UniqueName: \"kubernetes.io/projected/4f9f1edc-9805-4975-8e01-a3428d24cc00-kube-api-access-tjdlf\") pod \"certified-operators-ssztd\" (UID: \"4f9f1edc-9805-4975-8e01-a3428d24cc00\") " pod="openshift-marketplace/certified-operators-ssztd" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.717292 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f9f1edc-9805-4975-8e01-a3428d24cc00-catalog-content\") pod \"certified-operators-ssztd\" (UID: \"4f9f1edc-9805-4975-8e01-a3428d24cc00\") " pod="openshift-marketplace/certified-operators-ssztd" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.717384 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f9f1edc-9805-4975-8e01-a3428d24cc00-utilities\") pod \"certified-operators-ssztd\" (UID: \"4f9f1edc-9805-4975-8e01-a3428d24cc00\") " 
pod="openshift-marketplace/certified-operators-ssztd" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.717980 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f9f1edc-9805-4975-8e01-a3428d24cc00-utilities\") pod \"certified-operators-ssztd\" (UID: \"4f9f1edc-9805-4975-8e01-a3428d24cc00\") " pod="openshift-marketplace/certified-operators-ssztd" Jan 26 00:11:39 crc kubenswrapper[5110]: E0126 00:11:39.718066 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:40.218043816 +0000 UTC m=+157.446942425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.718692 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f9f1edc-9805-4975-8e01-a3428d24cc00-catalog-content\") pod \"certified-operators-ssztd\" (UID: \"4f9f1edc-9805-4975-8e01-a3428d24cc00\") " pod="openshift-marketplace/certified-operators-ssztd" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.818757 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:39 crc kubenswrapper[5110]: E0126 00:11:39.819176 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:40.319161339 +0000 UTC m=+157.548059948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.830732 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjdlf\" (UniqueName: \"kubernetes.io/projected/4f9f1edc-9805-4975-8e01-a3428d24cc00-kube-api-access-tjdlf\") pod \"certified-operators-ssztd\" (UID: \"4f9f1edc-9805-4975-8e01-a3428d24cc00\") " pod="openshift-marketplace/certified-operators-ssztd" Jan 26 00:11:39 crc kubenswrapper[5110]: I0126 00:11:39.920341 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:39 crc kubenswrapper[5110]: E0126 00:11:39.922054 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:40.422011123 +0000 UTC m=+157.650909732 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.024059 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:40 crc kubenswrapper[5110]: E0126 00:11:40.024493 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:40.524473245 +0000 UTC m=+157.753371864 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.125346 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:40 crc kubenswrapper[5110]: E0126 00:11:40.125553 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:40.625516477 +0000 UTC m=+157.854415096 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.126124 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:40 crc kubenswrapper[5110]: E0126 00:11:40.126604 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:40.626595488 +0000 UTC m=+157.855494097 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.185001 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n929c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:11:40 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 26 00:11:40 crc kubenswrapper[5110]: [+]process-running ok Jan 26 00:11:40 crc kubenswrapper[5110]: healthz check failed Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.185108 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n929c" podUID="49efa7f2-d990-4552-80cf-5d4a72a32ec7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.254123 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:40 crc kubenswrapper[5110]: E0126 00:11:40.254410 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-26 00:11:40.754390683 +0000 UTC m=+157.983289292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.356221 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:40 crc kubenswrapper[5110]: E0126 00:11:40.356677 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:40.85666041 +0000 UTC m=+158.085559019 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.417341 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bqdwj" event={"ID":"24efc76d-eb4f-4036-804e-a71705eb1a78","Type":"ContainerStarted","Data":"7c2054fdcac256e57d353fb94452ed8631d3afcd6939004594530b3e32e457b4"}
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.421616 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-l6gvr" event={"ID":"0be01a81-9d33-487a-abf9-0284a7b3f24b","Type":"ContainerStarted","Data":"eac5cac640e59476a89e4ce55349dc73c10deaf60a18f27a8b5b9fc58bfb2db7"}
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.457999 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:40 crc kubenswrapper[5110]: E0126 00:11:40.458939 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:40.958914026 +0000 UTC m=+158.187812635 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.460442 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7" event={"ID":"925904bd-cb57-4356-a6df-e72bd716398b","Type":"ContainerStarted","Data":"a9f672f9aa35b1c82d0f2d1d18df6d66a3d82ea778d5f1efec64c1c77a0b1d63"}
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.475497 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" event={"ID":"4ba4d5a4-755b-4d6e-b250-cd705b244775","Type":"ContainerStarted","Data":"64fc4338a39da7ed8a6a82def166294c46ef7187f1462f5c6f71b68a028d6dc9"}
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.478091 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh"
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.480899 5110 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-bjwbh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body=
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.480973 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" podUID="4ba4d5a4-755b-4d6e-b250-cd705b244775" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused"
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.483583 5110 generic.go:358] "Generic (PLEG): container finished" podID="9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb" containerID="a1d2ffe9aa28d184a3128e2530810a13ca2ed60200a125c00979d325cc96f2c3" exitCode=0
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.483674 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr" event={"ID":"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb","Type":"ContainerDied","Data":"a1d2ffe9aa28d184a3128e2530810a13ca2ed60200a125c00979d325cc96f2c3"}
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.487102 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp" event={"ID":"ddb3d5eb-9401-4032-9bcd-c2f798fbaf51","Type":"ContainerStarted","Data":"24bc911b073721e142afbd264eb25d6c28635ba57b5564ba59904562dec334cb"}
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.502927 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-bqdwj" podStartSLOduration=18.502901038 podStartE2EDuration="18.502901038s" podCreationTimestamp="2026-01-26 00:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:40.501753665 +0000 UTC m=+157.730652274" watchObservedRunningTime="2026-01-26 00:11:40.502901038 +0000 UTC m=+157.731799647"
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.506916 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4j4dd" event={"ID":"9cf38d4d-d415-4e81-85c3-bb2c65f7423b","Type":"ContainerStarted","Data":"3e3523eb3b3daafd3435cee71d41aa319c8ba4b263953010e538612c40e84e5d"}
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.508379 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-55st2" event={"ID":"5f875209-3895-465c-9992-37ef91c4dda9","Type":"ContainerStarted","Data":"c6703a796a360191ec5e0e251d99df6f705d657104630453647401a913853d7e"}
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.509735 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-jptld" event={"ID":"10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730","Type":"ContainerStarted","Data":"547ebadb0f325d7ccb18a95c8eca93ef2ef196a02c752c1876cca337c86d3106"}
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.511999 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" podUID="e2948eb5-9117-467a-89c9-742dde24b958" containerName="route-controller-manager" containerID="cri-o://63f944f750921c4c775a9e258a403b0feca13b2157d6fcefc5ce5d566614db94" gracePeriod=30
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.604610 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:40 crc kubenswrapper[5110]: E0126 00:11:40.605259 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:41.105238987 +0000 UTC m=+158.334137596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.627535 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t49fp" podStartSLOduration=137.627499761 podStartE2EDuration="2m17.627499761s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:40.626935114 +0000 UTC m=+157.855833723" watchObservedRunningTime="2026-01-26 00:11:40.627499761 +0000 UTC m=+157.856398370"
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.760892 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-l6gvr" podStartSLOduration=137.760865617 podStartE2EDuration="2m17.760865617s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:40.759155657 +0000 UTC m=+157.988054266" watchObservedRunningTime="2026-01-26 00:11:40.760865617 +0000 UTC m=+157.989764236"
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.764272 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:40 crc kubenswrapper[5110]: E0126 00:11:40.767732 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:41.267698994 +0000 UTC m=+158.496597603 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.929193 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:40 crc kubenswrapper[5110]: E0126 00:11:40.929614 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:41.429596685 +0000 UTC m=+158.658495294 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:40 crc kubenswrapper[5110]: I0126 00:11:40.988952 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5g6gk"]
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.089660 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:41 crc kubenswrapper[5110]: E0126 00:11:41.090680 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:41.590641731 +0000 UTC m=+158.819540340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.095510 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" podStartSLOduration=138.095480461 podStartE2EDuration="2m18.095480461s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:41.094420891 +0000 UTC m=+158.323319510" watchObservedRunningTime="2026-01-26 00:11:41.095480461 +0000 UTC m=+158.324379080"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.270986 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:41 crc kubenswrapper[5110]: E0126 00:11:41.271510 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:41.77148931 +0000 UTC m=+159.000387919 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.277059 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n929c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 00:11:41 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld
Jan 26 00:11:41 crc kubenswrapper[5110]: [+]process-running ok
Jan 26 00:11:41 crc kubenswrapper[5110]: healthz check failed
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.277181 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n929c" podUID="49efa7f2-d990-4552-80cf-5d4a72a32ec7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.317474 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-qknx7" podStartSLOduration=138.317447879 podStartE2EDuration="2m18.317447879s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:41.311036654 +0000 UTC m=+158.539935263" watchObservedRunningTime="2026-01-26 00:11:41.317447879 +0000 UTC m=+158.546346488"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.348659 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5g6gk"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.355814 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.359294 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g6gk"]
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.373838 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:41 crc kubenswrapper[5110]: E0126 00:11:41.373979 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:41.873948213 +0000 UTC m=+159.102846882 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.375005 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkjzj\" (UniqueName: \"kubernetes.io/projected/063f39d3-4de8-4c26-99fc-3148a8738541-kube-api-access-jkjzj\") pod \"redhat-marketplace-5g6gk\" (UID: \"063f39d3-4de8-4c26-99fc-3148a8738541\") " pod="openshift-marketplace/redhat-marketplace-5g6gk"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.375299 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/063f39d3-4de8-4c26-99fc-3148a8738541-utilities\") pod \"redhat-marketplace-5g6gk\" (UID: \"063f39d3-4de8-4c26-99fc-3148a8738541\") " pod="openshift-marketplace/redhat-marketplace-5g6gk"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.375373 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.375448 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/063f39d3-4de8-4c26-99fc-3148a8738541-catalog-content\") pod \"redhat-marketplace-5g6gk\" (UID: \"063f39d3-4de8-4c26-99fc-3148a8738541\") " pod="openshift-marketplace/redhat-marketplace-5g6gk"
Jan 26 00:11:41 crc kubenswrapper[5110]: E0126 00:11:41.376014 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:41.875995212 +0000 UTC m=+159.104893821 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.476891 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.477029 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/063f39d3-4de8-4c26-99fc-3148a8738541-catalog-content\") pod \"redhat-marketplace-5g6gk\" (UID: \"063f39d3-4de8-4c26-99fc-3148a8738541\") " pod="openshift-marketplace/redhat-marketplace-5g6gk"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.477070 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jkjzj\" (UniqueName: \"kubernetes.io/projected/063f39d3-4de8-4c26-99fc-3148a8738541-kube-api-access-jkjzj\") pod \"redhat-marketplace-5g6gk\" (UID: \"063f39d3-4de8-4c26-99fc-3148a8738541\") " pod="openshift-marketplace/redhat-marketplace-5g6gk"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.477155 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/063f39d3-4de8-4c26-99fc-3148a8738541-utilities\") pod \"redhat-marketplace-5g6gk\" (UID: \"063f39d3-4de8-4c26-99fc-3148a8738541\") " pod="openshift-marketplace/redhat-marketplace-5g6gk"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.477861 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/063f39d3-4de8-4c26-99fc-3148a8738541-utilities\") pod \"redhat-marketplace-5g6gk\" (UID: \"063f39d3-4de8-4c26-99fc-3148a8738541\") " pod="openshift-marketplace/redhat-marketplace-5g6gk"
Jan 26 00:11:41 crc kubenswrapper[5110]: E0126 00:11:41.477956 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:41.977934029 +0000 UTC m=+159.206832648 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.478226 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/063f39d3-4de8-4c26-99fc-3148a8738541-catalog-content\") pod \"redhat-marketplace-5g6gk\" (UID: \"063f39d3-4de8-4c26-99fc-3148a8738541\") " pod="openshift-marketplace/redhat-marketplace-5g6gk"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.491742 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k2xlk"]
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.537188 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkjzj\" (UniqueName: \"kubernetes.io/projected/063f39d3-4de8-4c26-99fc-3148a8738541-kube-api-access-jkjzj\") pod \"redhat-marketplace-5g6gk\" (UID: \"063f39d3-4de8-4c26-99fc-3148a8738541\") " pod="openshift-marketplace/redhat-marketplace-5g6gk"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.537540 5110 generic.go:358] "Generic (PLEG): container finished" podID="e2948eb5-9117-467a-89c9-742dde24b958" containerID="63f944f750921c4c775a9e258a403b0feca13b2157d6fcefc5ce5d566614db94" exitCode=0
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.557817 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" event={"ID":"e2948eb5-9117-467a-89c9-742dde24b958","Type":"ContainerDied","Data":"63f944f750921c4c775a9e258a403b0feca13b2157d6fcefc5ce5d566614db94"}
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.557904 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2xlk"]
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.557933 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-c5l6l" event={"ID":"6b4c71ca-e2f6-48d9-832a-1eca4fbcd9f3","Type":"ContainerStarted","Data":"752a9f1f9fad565f9493f0548d7ea5b03ce1f8e7ad03d367d252f41f90f81a45"}
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.558097 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k2xlk"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.567608 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr" event={"ID":"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb","Type":"ContainerDied","Data":"d5c7431865f8c6d06a8fa85405b20def9940d3d3b870449882766b19a541cb0c"}
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.567682 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5c7431865f8c6d06a8fa85405b20def9940d3d3b870449882766b19a541cb0c"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.581116 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"4fc984224a5e0c2c4de83c7cb881ae3dd3dfccd9f14876b6cbc8140becad5f10"}
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.581275 5110 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-bjwbh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body=
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.581314 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" podUID="4ba4d5a4-755b-4d6e-b250-cd705b244775" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.583945 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m6xq\" (UniqueName: \"kubernetes.io/projected/10001c1a-643a-49dd-b5ba-88376896f044-kube-api-access-2m6xq\") pod \"redhat-marketplace-k2xlk\" (UID: \"10001c1a-643a-49dd-b5ba-88376896f044\") " pod="openshift-marketplace/redhat-marketplace-k2xlk"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.583998 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10001c1a-643a-49dd-b5ba-88376896f044-utilities\") pod \"redhat-marketplace-k2xlk\" (UID: \"10001c1a-643a-49dd-b5ba-88376896f044\") " pod="openshift-marketplace/redhat-marketplace-k2xlk"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.584024 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10001c1a-643a-49dd-b5ba-88376896f044-catalog-content\") pod \"redhat-marketplace-k2xlk\" (UID: \"10001c1a-643a-49dd-b5ba-88376896f044\") " pod="openshift-marketplace/redhat-marketplace-k2xlk"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.584050 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:41 crc kubenswrapper[5110]: E0126 00:11:41.585812 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:42.085778507 +0000 UTC m=+159.314677116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.684933 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.685619 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2m6xq\" (UniqueName: \"kubernetes.io/projected/10001c1a-643a-49dd-b5ba-88376896f044-kube-api-access-2m6xq\") pod \"redhat-marketplace-k2xlk\" (UID: \"10001c1a-643a-49dd-b5ba-88376896f044\") " pod="openshift-marketplace/redhat-marketplace-k2xlk"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.685651 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10001c1a-643a-49dd-b5ba-88376896f044-utilities\") pod \"redhat-marketplace-k2xlk\" (UID: \"10001c1a-643a-49dd-b5ba-88376896f044\") " pod="openshift-marketplace/redhat-marketplace-k2xlk"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.685672 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10001c1a-643a-49dd-b5ba-88376896f044-catalog-content\") pod \"redhat-marketplace-k2xlk\" (UID: \"10001c1a-643a-49dd-b5ba-88376896f044\") " pod="openshift-marketplace/redhat-marketplace-k2xlk"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.686251 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10001c1a-643a-49dd-b5ba-88376896f044-catalog-content\") pod \"redhat-marketplace-k2xlk\" (UID: \"10001c1a-643a-49dd-b5ba-88376896f044\") " pod="openshift-marketplace/redhat-marketplace-k2xlk"
Jan 26 00:11:41 crc kubenswrapper[5110]: E0126 00:11:41.686317 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:42.186299843 +0000 UTC m=+159.415198452 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.687228 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10001c1a-643a-49dd-b5ba-88376896f044-utilities\") pod \"redhat-marketplace-k2xlk\" (UID: \"10001c1a-643a-49dd-b5ba-88376896f044\") " pod="openshift-marketplace/redhat-marketplace-k2xlk"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.713850 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m6xq\" (UniqueName: \"kubernetes.io/projected/10001c1a-643a-49dd-b5ba-88376896f044-kube-api-access-2m6xq\") pod \"redhat-marketplace-k2xlk\" (UID: \"10001c1a-643a-49dd-b5ba-88376896f044\") " pod="openshift-marketplace/redhat-marketplace-k2xlk"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.716398 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-c5l6l" podStartSLOduration=138.716376883 podStartE2EDuration="2m18.716376883s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:41.715121897 +0000 UTC m=+158.944020506" watchObservedRunningTime="2026-01-26 00:11:41.716376883 +0000 UTC m=+158.945275492"
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.788097 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:41 crc kubenswrapper[5110]: E0126 00:11:41.788787 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:42.288755456 +0000 UTC m=+159.517654095 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.892277 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:41 crc kubenswrapper[5110]: E0126 00:11:41.892522 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:42.392475885 +0000 UTC m=+159.621374494 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:41 crc kubenswrapper[5110]: I0126 00:11:41.892874 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:41 crc kubenswrapper[5110]: E0126 00:11:41.893449 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:42.393441703 +0000 UTC m=+159.622340312 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:41.993916 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:42 crc kubenswrapper[5110]: E0126 00:11:41.994281 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:42.494221006 +0000 UTC m=+159.723119615 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:41.994982 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:42 crc kubenswrapper[5110]: E0126 00:11:41.995727 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:42.495686219 +0000 UTC m=+159.724584828 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.027265 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d6ths"] Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.115104 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.115425 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d6ths"] Jan 26 00:11:42 crc kubenswrapper[5110]: E0126 00:11:42.115532 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:42.615499913 +0000 UTC m=+159.844398522 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.115642 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d6ths" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.117821 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.141592 5110 ???:1] "http: TLS handshake error from 192.168.126.11:38056: no serving certificate available for the kubelet" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.179677 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n929c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:11:42 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 26 00:11:42 crc kubenswrapper[5110]: [+]process-running ok Jan 26 00:11:42 crc kubenswrapper[5110]: healthz check failed Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.179751 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n929c" podUID="49efa7f2-d990-4552-80cf-5d4a72a32ec7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.243825 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.243897 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bj46\" (UniqueName: \"kubernetes.io/projected/168e3c2a-eb5e-451a-bba9-93e41fb1e958-kube-api-access-6bj46\") pod \"redhat-operators-d6ths\" (UID: \"168e3c2a-eb5e-451a-bba9-93e41fb1e958\") " pod="openshift-marketplace/redhat-operators-d6ths" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.243965 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/168e3c2a-eb5e-451a-bba9-93e41fb1e958-catalog-content\") pod \"redhat-operators-d6ths\" (UID: \"168e3c2a-eb5e-451a-bba9-93e41fb1e958\") " pod="openshift-marketplace/redhat-operators-d6ths" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.244167 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/168e3c2a-eb5e-451a-bba9-93e41fb1e958-utilities\") pod \"redhat-operators-d6ths\" (UID: \"168e3c2a-eb5e-451a-bba9-93e41fb1e958\") " pod="openshift-marketplace/redhat-operators-d6ths" Jan 26 00:11:42 crc kubenswrapper[5110]: E0126 00:11:42.244418 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:42.74439431 +0000 UTC m=+159.973292929 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.398128 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:42 crc kubenswrapper[5110]: E0126 00:11:42.398379 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:42.898338311 +0000 UTC m=+160.127236920 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.398560 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/168e3c2a-eb5e-451a-bba9-93e41fb1e958-catalog-content\") pod \"redhat-operators-d6ths\" (UID: \"168e3c2a-eb5e-451a-bba9-93e41fb1e958\") " pod="openshift-marketplace/redhat-operators-d6ths" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.398831 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/168e3c2a-eb5e-451a-bba9-93e41fb1e958-utilities\") pod \"redhat-operators-d6ths\" (UID: \"168e3c2a-eb5e-451a-bba9-93e41fb1e958\") " pod="openshift-marketplace/redhat-operators-d6ths" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.398976 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.399054 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6bj46\" (UniqueName: \"kubernetes.io/projected/168e3c2a-eb5e-451a-bba9-93e41fb1e958-kube-api-access-6bj46\") pod \"redhat-operators-d6ths\" (UID: 
\"168e3c2a-eb5e-451a-bba9-93e41fb1e958\") " pod="openshift-marketplace/redhat-operators-d6ths" Jan 26 00:11:42 crc kubenswrapper[5110]: E0126 00:11:42.399418 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:42.899395201 +0000 UTC m=+160.128293800 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.399920 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/168e3c2a-eb5e-451a-bba9-93e41fb1e958-catalog-content\") pod \"redhat-operators-d6ths\" (UID: \"168e3c2a-eb5e-451a-bba9-93e41fb1e958\") " pod="openshift-marketplace/redhat-operators-d6ths" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.400082 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/168e3c2a-eb5e-451a-bba9-93e41fb1e958-utilities\") pod \"redhat-operators-d6ths\" (UID: \"168e3c2a-eb5e-451a-bba9-93e41fb1e958\") " pod="openshift-marketplace/redhat-operators-d6ths" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.400986 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wsblt"] Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.427997 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bj46\" 
(UniqueName: \"kubernetes.io/projected/168e3c2a-eb5e-451a-bba9-93e41fb1e958-kube-api-access-6bj46\") pod \"redhat-operators-d6ths\" (UID: \"168e3c2a-eb5e-451a-bba9-93e41fb1e958\") " pod="openshift-marketplace/redhat-operators-d6ths" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.428629 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c2w74" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.442908 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wsblt" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.475572 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wsblt"] Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.504689 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2dlcz" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.595535 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:42 crc kubenswrapper[5110]: E0126 00:11:42.595686 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:43.095653606 +0000 UTC m=+160.324552215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.595818 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96073c20-1f42-44fa-a922-6681dab132ef-utilities\") pod \"redhat-operators-wsblt\" (UID: \"96073c20-1f42-44fa-a922-6681dab132ef\") " pod="openshift-marketplace/redhat-operators-wsblt" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.595952 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.596041 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfhr4\" (UniqueName: \"kubernetes.io/projected/96073c20-1f42-44fa-a922-6681dab132ef-kube-api-access-cfhr4\") pod \"redhat-operators-wsblt\" (UID: \"96073c20-1f42-44fa-a922-6681dab132ef\") " pod="openshift-marketplace/redhat-operators-wsblt" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.596118 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96073c20-1f42-44fa-a922-6681dab132ef-catalog-content\") 
pod \"redhat-operators-wsblt\" (UID: \"96073c20-1f42-44fa-a922-6681dab132ef\") " pod="openshift-marketplace/redhat-operators-wsblt" Jan 26 00:11:42 crc kubenswrapper[5110]: E0126 00:11:42.596843 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:43.096826119 +0000 UTC m=+160.325724728 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.697394 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:42 crc kubenswrapper[5110]: E0126 00:11:42.697629 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:43.197570972 +0000 UTC m=+160.426469571 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.697869 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cfhr4\" (UniqueName: \"kubernetes.io/projected/96073c20-1f42-44fa-a922-6681dab132ef-kube-api-access-cfhr4\") pod \"redhat-operators-wsblt\" (UID: \"96073c20-1f42-44fa-a922-6681dab132ef\") " pod="openshift-marketplace/redhat-operators-wsblt" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.698150 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96073c20-1f42-44fa-a922-6681dab132ef-catalog-content\") pod \"redhat-operators-wsblt\" (UID: \"96073c20-1f42-44fa-a922-6681dab132ef\") " pod="openshift-marketplace/redhat-operators-wsblt" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.698350 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96073c20-1f42-44fa-a922-6681dab132ef-utilities\") pod \"redhat-operators-wsblt\" (UID: \"96073c20-1f42-44fa-a922-6681dab132ef\") " pod="openshift-marketplace/redhat-operators-wsblt" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.699021 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96073c20-1f42-44fa-a922-6681dab132ef-catalog-content\") pod \"redhat-operators-wsblt\" (UID: \"96073c20-1f42-44fa-a922-6681dab132ef\") " 
pod="openshift-marketplace/redhat-operators-wsblt" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.699030 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96073c20-1f42-44fa-a922-6681dab132ef-utilities\") pod \"redhat-operators-wsblt\" (UID: \"96073c20-1f42-44fa-a922-6681dab132ef\") " pod="openshift-marketplace/redhat-operators-wsblt" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.726119 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ltvhd" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.729028 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfhr4\" (UniqueName: \"kubernetes.io/projected/96073c20-1f42-44fa-a922-6681dab132ef-kube-api-access-cfhr4\") pod \"redhat-operators-wsblt\" (UID: \"96073c20-1f42-44fa-a922-6681dab132ef\") " pod="openshift-marketplace/redhat-operators-wsblt" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.795252 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ssztd" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.799321 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:42 crc kubenswrapper[5110]: E0126 00:11:42.816229 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 00:11:43.316208401 +0000 UTC m=+160.545107010 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.819084 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5g6gk" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.842587 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-vd2k2" event={"ID":"defa6a4f-7d53-4d4c-b0c7-cfb7489db02c","Type":"ContainerDied","Data":"a245e20966aae520bf81600acdc7f29e989573d9f8ac297e37d591d0078c3fc2"} Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.842639 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a245e20966aae520bf81600acdc7f29e989573d9f8ac297e37d591d0078c3fc2" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.843165 5110 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-bjwbh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.843348 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" podUID="4ba4d5a4-755b-4d6e-b250-cd705b244775" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.843454 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.900807 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.900903 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-tmp\") pod \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.900924 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grskb\" (UniqueName: \"kubernetes.io/projected/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-kube-api-access-grskb\") pod \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.900956 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-proxy-ca-bundles\") pod \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.900982 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-config\") pod \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.901007 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-serving-cert\") pod \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.901041 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-client-ca\") pod \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\" (UID: \"9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb\") " Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.902595 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-client-ca" (OuterVolumeSpecName: "client-ca") pod "9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb" (UID: "9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.904833 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb" (UID: "9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:11:42 crc kubenswrapper[5110]: E0126 00:11:42.904949 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:43.404923246 +0000 UTC m=+160.633821855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.905449 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-config" (OuterVolumeSpecName: "config") pod "9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb" (UID: "9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:11:42 crc kubenswrapper[5110]: I0126 00:11:42.905870 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-tmp" (OuterVolumeSpecName: "tmp") pod "9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb" (UID: "9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.019267 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.019364 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-tmp\") on node \"crc\" DevicePath \"\""
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.019378 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.019392 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-config\") on node \"crc\" DevicePath \"\""
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.019400 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 00:11:43 crc kubenswrapper[5110]: E0126 00:11:43.019752 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:43.519733296 +0000 UTC m=+160.748631905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.023973 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-kube-api-access-grskb" (OuterVolumeSpecName: "kube-api-access-grskb") pod "9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb" (UID: "9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb"). InnerVolumeSpecName "kube-api-access-grskb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.027885 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb" (UID: "9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.036900 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-684d994d47-8c2v7"]
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.071341 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k2xlk"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.078306 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb" containerName="controller-manager"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.078366 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb" containerName="controller-manager"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.078539 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb" containerName="controller-manager"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.093976 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d6ths"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.109022 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wsblt"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.125684 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.126488 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.126508 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grskb\" (UniqueName: \"kubernetes.io/projected/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb-kube-api-access-grskb\") on node \"crc\" DevicePath \"\""
Jan 26 00:11:43 crc kubenswrapper[5110]: E0126 00:11:43.127034 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:43.627008958 +0000 UTC m=+160.855907557 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.160237 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-vd2k2"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.182378 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n929c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 00:11:43 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld
Jan 26 00:11:43 crc kubenswrapper[5110]: [+]process-running ok
Jan 26 00:11:43 crc kubenswrapper[5110]: healthz check failed
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.182461 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n929c" podUID="49efa7f2-d990-4552-80cf-5d4a72a32ec7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.194910 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-684d994d47-8c2v7"]
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.195108 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.202693 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.225266 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-qdwb2"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.231365 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwjd4\" (UniqueName: \"kubernetes.io/projected/defa6a4f-7d53-4d4c-b0c7-cfb7489db02c-kube-api-access-bwjd4\") pod \"defa6a4f-7d53-4d4c-b0c7-cfb7489db02c\" (UID: \"defa6a4f-7d53-4d4c-b0c7-cfb7489db02c\") "
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.231478 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/defa6a4f-7d53-4d4c-b0c7-cfb7489db02c-config-volume\") pod \"defa6a4f-7d53-4d4c-b0c7-cfb7489db02c\" (UID: \"defa6a4f-7d53-4d4c-b0c7-cfb7489db02c\") "
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.231508 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/defa6a4f-7d53-4d4c-b0c7-cfb7489db02c-secret-volume\") pod \"defa6a4f-7d53-4d4c-b0c7-cfb7489db02c\" (UID: \"defa6a4f-7d53-4d4c-b0c7-cfb7489db02c\") "
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.231849 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:43 crc kubenswrapper[5110]: E0126 00:11:43.232296 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:43.732258261 +0000 UTC m=+160.961156870 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.238935 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/defa6a4f-7d53-4d4c-b0c7-cfb7489db02c-config-volume" (OuterVolumeSpecName: "config-volume") pod "defa6a4f-7d53-4d4c-b0c7-cfb7489db02c" (UID: "defa6a4f-7d53-4d4c-b0c7-cfb7489db02c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.249166 5110 patch_prober.go:28] interesting pod/console-64d44f6ddf-tjq6l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.249255 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-tjq6l" podUID="766da705-879b-4d57-a34d-43eca4c9da19" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.332987 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.333379 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d38403be-a3f6-43a0-9957-39bc82a8870c-tmp\") pod \"controller-manager-684d994d47-8c2v7\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") " pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.333444 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9622\" (UniqueName: \"kubernetes.io/projected/d38403be-a3f6-43a0-9957-39bc82a8870c-kube-api-access-v9622\") pod \"controller-manager-684d994d47-8c2v7\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") " pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.333475 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d38403be-a3f6-43a0-9957-39bc82a8870c-serving-cert\") pod \"controller-manager-684d994d47-8c2v7\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") " pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.333508 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d38403be-a3f6-43a0-9957-39bc82a8870c-config\") pod \"controller-manager-684d994d47-8c2v7\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") " pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.333535 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d38403be-a3f6-43a0-9957-39bc82a8870c-proxy-ca-bundles\") pod \"controller-manager-684d994d47-8c2v7\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") " pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.333636 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d38403be-a3f6-43a0-9957-39bc82a8870c-client-ca\") pod \"controller-manager-684d994d47-8c2v7\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") " pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.333749 5110 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/defa6a4f-7d53-4d4c-b0c7-cfb7489db02c-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 00:11:43 crc kubenswrapper[5110]: E0126 00:11:43.334907 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:43.834884328 +0000 UTC m=+161.063782937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.435861 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v9622\" (UniqueName: \"kubernetes.io/projected/d38403be-a3f6-43a0-9957-39bc82a8870c-kube-api-access-v9622\") pod \"controller-manager-684d994d47-8c2v7\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") " pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.435918 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d38403be-a3f6-43a0-9957-39bc82a8870c-serving-cert\") pod \"controller-manager-684d994d47-8c2v7\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") " pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.435946 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d38403be-a3f6-43a0-9957-39bc82a8870c-config\") pod \"controller-manager-684d994d47-8c2v7\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") " pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.435969 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d38403be-a3f6-43a0-9957-39bc82a8870c-proxy-ca-bundles\") pod \"controller-manager-684d994d47-8c2v7\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") " pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.435995 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.436031 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d38403be-a3f6-43a0-9957-39bc82a8870c-client-ca\") pod \"controller-manager-684d994d47-8c2v7\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") " pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.436092 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d38403be-a3f6-43a0-9957-39bc82a8870c-tmp\") pod \"controller-manager-684d994d47-8c2v7\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") " pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.436699 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d38403be-a3f6-43a0-9957-39bc82a8870c-tmp\") pod \"controller-manager-684d994d47-8c2v7\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") " pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.444454 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d38403be-a3f6-43a0-9957-39bc82a8870c-proxy-ca-bundles\") pod \"controller-manager-684d994d47-8c2v7\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") " pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.445473 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d38403be-a3f6-43a0-9957-39bc82a8870c-config\") pod \"controller-manager-684d994d47-8c2v7\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") " pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7"
Jan 26 00:11:43 crc kubenswrapper[5110]: E0126 00:11:43.445797 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:43.945775454 +0000 UTC m=+161.174674063 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.446939 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d38403be-a3f6-43a0-9957-39bc82a8870c-client-ca\") pod \"controller-manager-684d994d47-8c2v7\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") " pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.507533 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/defa6a4f-7d53-4d4c-b0c7-cfb7489db02c-kube-api-access-bwjd4" (OuterVolumeSpecName: "kube-api-access-bwjd4") pod "defa6a4f-7d53-4d4c-b0c7-cfb7489db02c" (UID: "defa6a4f-7d53-4d4c-b0c7-cfb7489db02c"). InnerVolumeSpecName "kube-api-access-bwjd4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.544784 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.545286 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bwjd4\" (UniqueName: \"kubernetes.io/projected/defa6a4f-7d53-4d4c-b0c7-cfb7489db02c-kube-api-access-bwjd4\") on node \"crc\" DevicePath \"\""
Jan 26 00:11:43 crc kubenswrapper[5110]: E0126 00:11:43.545364 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:44.045342683 +0000 UTC m=+161.274241292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.635196 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d38403be-a3f6-43a0-9957-39bc82a8870c-serving-cert\") pod \"controller-manager-684d994d47-8c2v7\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") " pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.646158 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:43 crc kubenswrapper[5110]: E0126 00:11:43.646561 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:44.146544519 +0000 UTC m=+161.375443128 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.673943 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defa6a4f-7d53-4d4c-b0c7-cfb7489db02c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "defa6a4f-7d53-4d4c-b0c7-cfb7489db02c" (UID: "defa6a4f-7d53-4d4c-b0c7-cfb7489db02c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.694827 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9622\" (UniqueName: \"kubernetes.io/projected/d38403be-a3f6-43a0-9957-39bc82a8870c-kube-api-access-v9622\") pod \"controller-manager-684d994d47-8c2v7\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") " pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.754681 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.755076 5110 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/defa6a4f-7d53-4d4c-b0c7-cfb7489db02c-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 00:11:43 crc kubenswrapper[5110]: E0126 00:11:43.755181 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:44.255148879 +0000 UTC m=+161.484047488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.859749 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:43 crc kubenswrapper[5110]: E0126 00:11:43.860145 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:44.360122734 +0000 UTC m=+161.589021343 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.878831 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7"
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.966599 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:43 crc kubenswrapper[5110]: E0126 00:11:43.967181 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:44.467130308 +0000 UTC m=+161.696028927 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:43 crc kubenswrapper[5110]: I0126 00:11:43.968127 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:43 crc kubenswrapper[5110]: E0126 00:11:43.970062 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:44.470037262 +0000 UTC m=+161.698935871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.019663 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4"
Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.178163 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vl2cl" event={"ID":"3451ac5b-8ad7-4419-b262-ec54012b9dc6","Type":"ContainerStarted","Data":"c28602040bdf97b737c0ea33b117ca9888d5c96cbbc830fa0adc741dad343621"}
Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.190489 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2948eb5-9117-467a-89c9-742dde24b958-config\") pod \"e2948eb5-9117-467a-89c9-742dde24b958\" (UID: \"e2948eb5-9117-467a-89c9-742dde24b958\") "
Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.190586 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2948eb5-9117-467a-89c9-742dde24b958-client-ca\") pod \"e2948eb5-9117-467a-89c9-742dde24b958\" (UID: \"e2948eb5-9117-467a-89c9-742dde24b958\") "
Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.190627 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clsvv\" (UniqueName: \"kubernetes.io/projected/e2948eb5-9117-467a-89c9-742dde24b958-kube-api-access-clsvv\") pod \"e2948eb5-9117-467a-89c9-742dde24b958\" (UID: \"e2948eb5-9117-467a-89c9-742dde24b958\") "
Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.190668 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e2948eb5-9117-467a-89c9-742dde24b958-tmp\") pod \"e2948eb5-9117-467a-89c9-742dde24b958\" (UID: \"e2948eb5-9117-467a-89c9-742dde24b958\") "
Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.190825 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.190861 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2948eb5-9117-467a-89c9-742dde24b958-serving-cert\") pod \"e2948eb5-9117-467a-89c9-742dde24b958\" (UID: \"e2948eb5-9117-467a-89c9-742dde24b958\") "
Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.192107 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2948eb5-9117-467a-89c9-742dde24b958-tmp" (OuterVolumeSpecName: "tmp") pod "e2948eb5-9117-467a-89c9-742dde24b958" (UID: "e2948eb5-9117-467a-89c9-742dde24b958"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.192455 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2948eb5-9117-467a-89c9-742dde24b958-config" (OuterVolumeSpecName: "config") pod "e2948eb5-9117-467a-89c9-742dde24b958" (UID: "e2948eb5-9117-467a-89c9-742dde24b958"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:11:44 crc kubenswrapper[5110]: E0126 00:11:44.192518 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:44.692483753 +0000 UTC m=+161.921382402 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.192591 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2948eb5-9117-467a-89c9-742dde24b958-client-ca" (OuterVolumeSpecName: "client-ca") pod "e2948eb5-9117-467a-89c9-742dde24b958" (UID: "e2948eb5-9117-467a-89c9-742dde24b958"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.294002 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.294586 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2948eb5-9117-467a-89c9-742dde24b958-config\") on node \"crc\" DevicePath \"\""
Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.294602 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2948eb5-9117-467a-89c9-742dde24b958-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.294613 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e2948eb5-9117-467a-89c9-742dde24b958-tmp\") on node \"crc\" DevicePath \"\""
Jan 26 00:11:44 crc kubenswrapper[5110]: E0126 00:11:44.294990 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:44.794974787 +0000 UTC m=+162.023873396 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.303853 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n929c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 00:11:44 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld
Jan 26 00:11:44 crc kubenswrapper[5110]: [+]process-running ok
Jan 26 00:11:44 crc kubenswrapper[5110]: healthz check failed
Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.303942 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n929c" podUID="49efa7f2-d990-4552-80cf-5d4a72a32ec7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 00:11:44 crc kubenswrapper[5110]: E0126 00:11:44.304338 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , 
exit code -1" containerID="b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.304730 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-jptld" event={"ID":"10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730","Type":"ContainerStarted","Data":"93ead125a4ae7b6192a1a7440a91dcf1321a6914ecca22474ed1fdfb9e051da5"} Jan 26 00:11:44 crc kubenswrapper[5110]: E0126 00:11:44.333021 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:11:44 crc kubenswrapper[5110]: E0126 00:11:44.335724 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:11:44 crc kubenswrapper[5110]: E0126 00:11:44.335818 5110 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" podUID="8c8a6437-b0cb-4825-999f-6e523fd394e9" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.338306 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzlvv" 
event={"ID":"4464550b-e8e5-4f07-8e54-40eb792eb201","Type":"ContainerStarted","Data":"81eb9eb395ec02786951da7ea4bd29453eb107e8edf6fcaf2331c59351f43525"} Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.360945 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8ndzr" event={"ID":"040d3d5f-c02a-4a70-92af-70700fd9e3c3","Type":"ContainerStarted","Data":"b79f33258aa84e28e9ac36e26cc800799139d0d5449611263442478367261d30"} Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.368210 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2948eb5-9117-467a-89c9-742dde24b958-kube-api-access-clsvv" (OuterVolumeSpecName: "kube-api-access-clsvv") pod "e2948eb5-9117-467a-89c9-742dde24b958" (UID: "e2948eb5-9117-467a-89c9-742dde24b958"). InnerVolumeSpecName "kube-api-access-clsvv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.368378 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2948eb5-9117-467a-89c9-742dde24b958-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e2948eb5-9117-467a-89c9-742dde24b958" (UID: "e2948eb5-9117-467a-89c9-742dde24b958"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.376388 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" event={"ID":"e2948eb5-9117-467a-89c9-742dde24b958","Type":"ContainerDied","Data":"12893f4213c5e6b3cd62e5343e75bee6615e0a433cae719ada046543045c1b48"} Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.376446 5110 scope.go:117] "RemoveContainer" containerID="63f944f750921c4c775a9e258a403b0feca13b2157d6fcefc5ce5d566614db94" Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.377784 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4" Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.397719 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.398018 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2948eb5-9117-467a-89c9-742dde24b958-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.398038 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-clsvv\" (UniqueName: \"kubernetes.io/projected/e2948eb5-9117-467a-89c9-742dde24b958-kube-api-access-clsvv\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:44 crc kubenswrapper[5110]: E0126 00:11:44.398110 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:44.898091038 +0000 UTC m=+162.126989647 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.481278 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-g57zz" event={"ID":"d7cb0851-2110-44f5-b3fc-77478d1fd49f","Type":"ContainerStarted","Data":"a5dab8b1877b413551f3564a99d4ca090fe6a1d8e2188d49adb7857063c53e74"} Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.484629 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-8chxh" event={"ID":"1d983c82-449b-4550-986f-5d378560b332","Type":"ContainerStarted","Data":"a3ee714f0b04b4a75e6a38785c795536e296baeb3048365079d0f439bfee5fef"} Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.523386 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4j4dd" event={"ID":"9cf38d4d-d415-4e81-85c3-bb2c65f7423b","Type":"ContainerStarted","Data":"b51857455049666c146d6b5b8da4cc3315482c30dd535fc77940ee6e20c648bf"} Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.525883 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:44 crc kubenswrapper[5110]: E0126 00:11:44.549979 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:45.049946659 +0000 UTC m=+162.278845268 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.583309 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-crnjv" event={"ID":"77aa983a-f3c1-4799-a84b-c3c7a381a1bc","Type":"ContainerStarted","Data":"a0629fea47e1b3b1d8ba962c8797285f949bda78a4c0d9c90e3b58f2a4cc8ee6"} Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.619180 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-55st2" event={"ID":"5f875209-3895-465c-9992-37ef91c4dda9","Type":"ContainerStarted","Data":"307c0149827c9e66d97fb5c3c4c640f78f6f6af107ac65b4ad579e75734c63cf"} Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.619307 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-8ssgr" Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.619833 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-vd2k2" Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.628151 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:44 crc kubenswrapper[5110]: E0126 00:11:44.628450 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:45.128419148 +0000 UTC m=+162.357317757 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.628529 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:44 crc kubenswrapper[5110]: E0126 00:11:44.630631 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:45.130622741 +0000 UTC m=+162.359521350 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.759689 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:44 crc kubenswrapper[5110]: E0126 00:11:44.760151 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:45.260128436 +0000 UTC m=+162.489027045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.862206 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c2w74"] Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.864035 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-jptld" podStartSLOduration=141.86402252 podStartE2EDuration="2m21.86402252s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:44.854524185 +0000 UTC m=+162.083422804" watchObservedRunningTime="2026-01-26 00:11:44.86402252 +0000 UTC m=+162.092921129" Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.865034 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:44 crc kubenswrapper[5110]: E0126 00:11:44.865601 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 00:11:45.365583475 +0000 UTC m=+162.594482084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.908311 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m"] Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.909280 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e2948eb5-9117-467a-89c9-742dde24b958" containerName="route-controller-manager" Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.909306 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2948eb5-9117-467a-89c9-742dde24b958" containerName="route-controller-manager" Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.909352 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="defa6a4f-7d53-4d4c-b0c7-cfb7489db02c" containerName="collect-profiles" Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.909361 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="defa6a4f-7d53-4d4c-b0c7-cfb7489db02c" containerName="collect-profiles" Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.909484 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="defa6a4f-7d53-4d4c-b0c7-cfb7489db02c" containerName="collect-profiles" Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.909497 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="e2948eb5-9117-467a-89c9-742dde24b958" 
containerName="route-controller-manager" Jan 26 00:11:44 crc kubenswrapper[5110]: I0126 00:11:44.966020 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:44 crc kubenswrapper[5110]: E0126 00:11:44.966509 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:45.466477462 +0000 UTC m=+162.695376081 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.100352 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4"] Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.100702 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-rwwx4"] Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.101680 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:45 crc kubenswrapper[5110]: E0126 00:11:45.102188 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:45.602172555 +0000 UTC m=+162.831071164 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.102320 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.114290 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m"] Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.211126 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.211310 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.211521 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wp88\" (UniqueName: \"kubernetes.io/projected/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-kube-api-access-8wp88\") pod \"route-controller-manager-6b78f8c8c8-lrx8m\" (UID: \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\") " pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.211565 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-config\") pod \"route-controller-manager-6b78f8c8c8-lrx8m\" (UID: \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\") " pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.211597 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-tmp\") pod \"route-controller-manager-6b78f8c8c8-lrx8m\" (UID: \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\") " pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.211662 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-serving-cert\") pod \"route-controller-manager-6b78f8c8c8-lrx8m\" (UID: \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\") " pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.211681 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-client-ca\") pod \"route-controller-manager-6b78f8c8c8-lrx8m\" (UID: \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\") " pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" Jan 26 00:11:45 crc kubenswrapper[5110]: E0126 00:11:45.211888 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:45.711855756 +0000 UTC m=+162.940754365 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.212060 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.212356 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.212512 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.212682 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.212828 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.216955 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"] Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.218460 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-8ssgr"] Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.343934 5110 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.344023 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8wp88\" (UniqueName: \"kubernetes.io/projected/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-kube-api-access-8wp88\") pod \"route-controller-manager-6b78f8c8c8-lrx8m\" (UID: \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\") " pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.344074 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-config\") pod \"route-controller-manager-6b78f8c8c8-lrx8m\" (UID: \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\") " pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.344117 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-tmp\") pod \"route-controller-manager-6b78f8c8c8-lrx8m\" (UID: \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\") " pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.344163 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-serving-cert\") pod \"route-controller-manager-6b78f8c8c8-lrx8m\" (UID: \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\") " 
pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.344202 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-client-ca\") pod \"route-controller-manager-6b78f8c8c8-lrx8m\" (UID: \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\") " pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.347979 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-config\") pod \"route-controller-manager-6b78f8c8c8-lrx8m\" (UID: \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\") " pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.348861 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-tmp\") pod \"route-controller-manager-6b78f8c8c8-lrx8m\" (UID: \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\") " pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.352019 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n929c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:11:45 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 26 00:11:45 crc kubenswrapper[5110]: [+]process-running ok Jan 26 00:11:45 crc kubenswrapper[5110]: healthz check failed Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.352144 5110 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-68cf44c8b8-n929c" podUID="49efa7f2-d990-4552-80cf-5d4a72a32ec7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:11:45 crc kubenswrapper[5110]: E0126 00:11:45.367957 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:45.867909989 +0000 UTC m=+163.096808588 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.372090 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-client-ca\") pod \"route-controller-manager-6b78f8c8c8-lrx8m\" (UID: \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\") " pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.397066 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-serving-cert\") pod \"route-controller-manager-6b78f8c8c8-lrx8m\" (UID: \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\") " pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.476486 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:45 crc kubenswrapper[5110]: E0126 00:11:45.476947 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:45.97692108 +0000 UTC m=+163.205819689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.562318 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb" path="/var/lib/kubelet/pods/9c6c5e03-641e-4e9a-9d24-8b4565f9bdbb/volumes" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.563230 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2948eb5-9117-467a-89c9-742dde24b958" path="/var/lib/kubelet/pods/e2948eb5-9117-467a-89c9-742dde24b958/volumes" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.588176 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " 
pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:45 crc kubenswrapper[5110]: E0126 00:11:45.588673 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:46.088657011 +0000 UTC m=+163.317555620 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.626171 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=10.626149535 podStartE2EDuration="10.626149535s" podCreationTimestamp="2026-01-26 00:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:45.625132446 +0000 UTC m=+162.854031065" watchObservedRunningTime="2026-01-26 00:11:45.626149535 +0000 UTC m=+162.855048144" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.626965 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-g57zz" podStartSLOduration=142.626955348 podStartE2EDuration="2m22.626955348s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:45.28251669 +0000 UTC m=+162.511415309" 
watchObservedRunningTime="2026-01-26 00:11:45.626955348 +0000 UTC m=+162.855853977" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.671833 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"772c8d38-81bf-4f54-b995-e740d2056ead","Type":"ContainerStarted","Data":"4f3553de869f9bd061feae3488be2b91031388d07af5cc6effb71189aff4d0b1"} Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.685892 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wp88\" (UniqueName: \"kubernetes.io/projected/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-kube-api-access-8wp88\") pod \"route-controller-manager-6b78f8c8c8-lrx8m\" (UID: \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\") " pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.691064 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:45 crc kubenswrapper[5110]: E0126 00:11:45.692635 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:46.192613987 +0000 UTC m=+163.421512596 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.777603 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2w74" event={"ID":"20cad200-6605-4fac-b28d-b84cd2d74a89","Type":"ContainerStarted","Data":"c106de74666cc50a282e74a521bb87502718ee2c81d38e89d01e089e0a13bd46"} Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.792466 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:45 crc kubenswrapper[5110]: E0126 00:11:45.794387 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:46.294369939 +0000 UTC m=+163.523268538 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.933465 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.933569 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.934900 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:45 crc kubenswrapper[5110]: E0126 00:11:45.935344 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:46.435323774 +0000 UTC m=+163.664222383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.936449 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:45 crc kubenswrapper[5110]: E0126 00:11:45.937331 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:46.437321612 +0000 UTC m=+163.666220221 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:45 crc kubenswrapper[5110]: I0126 00:11:45.956411 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.089294 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:46 crc kubenswrapper[5110]: E0126 00:11:46.089840 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:46.58979579 +0000 UTC m=+163.818694399 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.102312 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4j4dd" podStartSLOduration=143.102295632 podStartE2EDuration="2m23.102295632s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:45.769922252 +0000 UTC m=+162.998820881" watchObservedRunningTime="2026-01-26 00:11:46.102295632 +0000 UTC m=+163.331194241" Jan 26 00:11:46 crc kubenswrapper[5110]: 
I0126 00:11:46.223570 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:46 crc kubenswrapper[5110]: E0126 00:11:46.224401 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:46.724382172 +0000 UTC m=+163.953280781 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.329546 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:46 crc kubenswrapper[5110]: E0126 00:11:46.329848 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-26 00:11:46.829822149 +0000 UTC m=+164.058720758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.361523 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-8chxh" podStartSLOduration=143.361491075 podStartE2EDuration="2m23.361491075s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:46.107225934 +0000 UTC m=+163.336124543" watchObservedRunningTime="2026-01-26 00:11:46.361491075 +0000 UTC m=+163.590389684" Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.420547 5110 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-bjwbh container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.420624 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" podUID="4ba4d5a4-755b-4d6e-b250-cd705b244775" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.431791 5110 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:46 crc kubenswrapper[5110]: E0126 00:11:46.432186 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:46.932173108 +0000 UTC m=+164.161071717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.480961 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-n929c" Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.487750 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-crnjv" podStartSLOduration=143.487731795 podStartE2EDuration="2m23.487731795s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:46.360538427 +0000 UTC m=+163.589437056" watchObservedRunningTime="2026-01-26 00:11:46.487731795 +0000 UTC 
m=+163.716630404" Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.494372 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-55st2" podStartSLOduration=143.494346816 podStartE2EDuration="2m23.494346816s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:46.486976423 +0000 UTC m=+163.715875032" watchObservedRunningTime="2026-01-26 00:11:46.494346816 +0000 UTC m=+163.723245425" Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.502977 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ltvhd"] Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.506186 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2dlcz"] Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.515585 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ssztd"] Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.528128 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-8ndzr" podStartSLOduration=143.528099192 podStartE2EDuration="2m23.528099192s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:46.527827444 +0000 UTC m=+163.756726053" watchObservedRunningTime="2026-01-26 00:11:46.528099192 +0000 UTC m=+163.756997801" Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.530840 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.533002 5110 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:46 crc kubenswrapper[5110]: E0126 00:11:46.534304 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.034282121 +0000 UTC m=+164.263180730 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.635331 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:46 crc kubenswrapper[5110]: E0126 00:11:46.635829 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.135807906 +0000 UTC m=+164.364706515 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:46 crc kubenswrapper[5110]: W0126 00:11:46.713417 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24c8aa2b_e367_4f40_92d2_ff31d5cbce2d.slice/crio-3fe162bddca2db0ac193f6255f9f759e282c846fa80b66d348708f624a93d157 WatchSource:0}: Error finding container 3fe162bddca2db0ac193f6255f9f759e282c846fa80b66d348708f624a93d157: Status 404 returned error can't find the container with id 3fe162bddca2db0ac193f6255f9f759e282c846fa80b66d348708f624a93d157 Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.745003 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:46 crc kubenswrapper[5110]: E0126 00:11:46.745227 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.245164028 +0000 UTC m=+164.474062637 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.751432 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:46 crc kubenswrapper[5110]: W0126 00:11:46.749018 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f9f1edc_9805_4975_8e01_a3428d24cc00.slice/crio-817c9ccabaef4b0a230f8773ff7f8a82b43fb582a546e3a28060aad6aa01dd8a WatchSource:0}: Error finding container 817c9ccabaef4b0a230f8773ff7f8a82b43fb582a546e3a28060aad6aa01dd8a: Status 404 returned error can't find the container with id 817c9ccabaef4b0a230f8773ff7f8a82b43fb582a546e3a28060aad6aa01dd8a Jan 26 00:11:46 crc kubenswrapper[5110]: E0126 00:11:46.751996 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.251978005 +0000 UTC m=+164.480876614 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.854367 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:46 crc kubenswrapper[5110]: E0126 00:11:46.855266 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.35522655 +0000 UTC m=+164.584125159 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.859229 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:46 crc kubenswrapper[5110]: E0126 00:11:46.859842 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.359817073 +0000 UTC m=+164.588715672 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:46 crc kubenswrapper[5110]: I0126 00:11:46.964382 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:46 crc kubenswrapper[5110]: E0126 00:11:46.964825 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.464771678 +0000 UTC m=+164.693670287 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5110]: I0126 00:11:47.066887 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:47 crc kubenswrapper[5110]: E0126 00:11:47.067333 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.567319492 +0000 UTC m=+164.796218101 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5110]: I0126 00:11:47.168106 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:47 crc kubenswrapper[5110]: E0126 00:11:47.168489 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.668459837 +0000 UTC m=+164.897358446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5110]: W0126 00:11:47.267196 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96073c20_1f42_44fa_a922_6681dab132ef.slice/crio-1ea07b88848eedd72f5d3c5365a66f9edfb36655e2a77cbc3c9333a9d85944b2 WatchSource:0}: Error finding container 1ea07b88848eedd72f5d3c5365a66f9edfb36655e2a77cbc3c9333a9d85944b2: Status 404 returned error can't find the container with id 1ea07b88848eedd72f5d3c5365a66f9edfb36655e2a77cbc3c9333a9d85944b2 Jan 26 00:11:47 crc kubenswrapper[5110]: W0126 00:11:47.267555 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod168e3c2a_eb5e_451a_bba9_93e41fb1e958.slice/crio-dcc05bbda57050aa25fe308b52cb9cb6253ae4d76a0f4da40a76164897a3d634 WatchSource:0}: Error finding container dcc05bbda57050aa25fe308b52cb9cb6253ae4d76a0f4da40a76164897a3d634: Status 404 returned error can't find the container with id dcc05bbda57050aa25fe308b52cb9cb6253ae4d76a0f4da40a76164897a3d634 Jan 26 00:11:47 crc kubenswrapper[5110]: I0126 00:11:47.270089 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:47 crc 
kubenswrapper[5110]: E0126 00:11:47.279193 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.779158287 +0000 UTC m=+165.008056906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5110]: W0126 00:11:47.280445 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod063f39d3_4de8_4c26_99fc_3148a8738541.slice/crio-d9ecbc99cba0f6ba00965177aa64e3f0d92941a2af113982c8b4a4fb2c6d8d73 WatchSource:0}: Error finding container d9ecbc99cba0f6ba00965177aa64e3f0d92941a2af113982c8b4a4fb2c6d8d73: Status 404 returned error can't find the container with id d9ecbc99cba0f6ba00965177aa64e3f0d92941a2af113982c8b4a4fb2c6d8d73 Jan 26 00:11:47 crc kubenswrapper[5110]: I0126 00:11:47.301106 5110 ???:1] "http: TLS handshake error from 192.168.126.11:38070: no serving certificate available for the kubelet" Jan 26 00:11:47 crc kubenswrapper[5110]: I0126 00:11:47.372125 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:47 crc kubenswrapper[5110]: E0126 00:11:47.372454 5110 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.872399743 +0000 UTC m=+165.101298352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5110]: I0126 00:11:47.372775 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:47 crc kubenswrapper[5110]: E0126 00:11:47.373315 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.873287219 +0000 UTC m=+165.102186008 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5110]: I0126 00:11:47.474513 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:47 crc kubenswrapper[5110]: E0126 00:11:47.475156 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.975126723 +0000 UTC m=+165.204025352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5110]: E0126 00:11:47.577844 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 00:11:48.077765141 +0000 UTC m=+165.306663770 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5110]: I0126 00:11:47.577286 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:47 crc kubenswrapper[5110]: I0126 00:11:47.679930 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:47 crc kubenswrapper[5110]: E0126 00:11:47.680147 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.18010581 +0000 UTC m=+165.409004529 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5110]: I0126 00:11:47.680615 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:47 crc kubenswrapper[5110]: E0126 00:11:47.681014 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.180997156 +0000 UTC m=+165.409895765 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5110]: I0126 00:11:47.781924 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:47 crc kubenswrapper[5110]: E0126 00:11:47.782197 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.28213812 +0000 UTC m=+165.511036749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5110]: I0126 00:11:47.884380 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:47 crc kubenswrapper[5110]: E0126 00:11:47.884736 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.384720686 +0000 UTC m=+165.613619295 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5110]: I0126 00:11:47.986537 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:47 crc kubenswrapper[5110]: E0126 00:11:47.986832 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.486776147 +0000 UTC m=+165.715674766 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5110]: I0126 00:11:47.987373 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:47 crc kubenswrapper[5110]: E0126 00:11:47.988370 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.488333622 +0000 UTC m=+165.717232271 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.089102 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:48 crc kubenswrapper[5110]: E0126 00:11:48.089373 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.589332692 +0000 UTC m=+165.818231321 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.089862 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:48 crc kubenswrapper[5110]: E0126 00:11:48.090402 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.590373632 +0000 UTC m=+165.819272251 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.191447 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:48 crc kubenswrapper[5110]: E0126 00:11:48.191752 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.691708032 +0000 UTC m=+165.920606641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.293630 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:48 crc kubenswrapper[5110]: E0126 00:11:48.294199 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.794175914 +0000 UTC m=+166.023074533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.355997 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssztd" event={"ID":"4f9f1edc-9805-4975-8e01-a3428d24cc00","Type":"ContainerStarted","Data":"817c9ccabaef4b0a230f8773ff7f8a82b43fb582a546e3a28060aad6aa01dd8a"} Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.356088 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.356239 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-n929c" Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.356286 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-67svm" Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.356318 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.360248 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.360772 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.373788 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.373904 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.399238 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.399504 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/919cb2e5-eacd-4b72-b8e7-e5e85e582f40-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"919cb2e5-eacd-4b72-b8e7-e5e85e582f40\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:11:48 
crc kubenswrapper[5110]: I0126 00:11:48.399832 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/919cb2e5-eacd-4b72-b8e7-e5e85e582f40-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"919cb2e5-eacd-4b72-b8e7-e5e85e582f40\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:11:48 crc kubenswrapper[5110]: E0126 00:11:48.401894 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.901864608 +0000 UTC m=+166.130763217 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.439985 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dlcz" event={"ID":"4276f77a-a21c-47f2-9902-e37c3ab865f5","Type":"ContainerStarted","Data":"8d921239e7a5def225333d6a4aef3f5dcaf7bc2d44478541ac5e5e15c3c1533c"} Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.440084 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-7fnwf" Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.440129 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-vl2cl" Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 
00:11:48.440141 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vl2cl" event={"ID":"3451ac5b-8ad7-4419-b262-ec54012b9dc6","Type":"ContainerStarted","Data":"da8d8368476ad673d458b626908b8e3269bd48e5f843865f36d0a35041147e89"} Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.440153 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltvhd" event={"ID":"24c8aa2b-e367-4f40-92d2-ff31d5cbce2d","Type":"ContainerStarted","Data":"3fe162bddca2db0ac193f6255f9f759e282c846fa80b66d348708f624a93d157"} Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.440164 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6ths" event={"ID":"168e3c2a-eb5e-451a-bba9-93e41fb1e958","Type":"ContainerStarted","Data":"dcc05bbda57050aa25fe308b52cb9cb6253ae4d76a0f4da40a76164897a3d634"} Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.440180 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-684d994d47-8c2v7"] Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.440195 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsblt" event={"ID":"96073c20-1f42-44fa-a922-6681dab132ef","Type":"ContainerStarted","Data":"1ea07b88848eedd72f5d3c5365a66f9edfb36655e2a77cbc3c9333a9d85944b2"} Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.440210 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2xlk"] Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.440224 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" event={"ID":"5bbb8424-18f0-4d1c-8f78-7a42e252cafb","Type":"ContainerStarted","Data":"2dd4d4861434d396bbb7f1cf4c5c00653695212039197a6bcfa2bac39ca97d4d"} Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 
00:11:48.440237 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m"] Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.440248 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wsblt"] Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.440259 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g6gk" event={"ID":"063f39d3-4de8-4c26-99fc-3148a8738541","Type":"ContainerStarted","Data":"d9ecbc99cba0f6ba00965177aa64e3f0d92941a2af113982c8b4a4fb2c6d8d73"} Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.440271 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7" event={"ID":"d38403be-a3f6-43a0-9957-39bc82a8870c","Type":"ContainerStarted","Data":"e40aaf37cebda34064bfdf5608b88c7e1615c159b4cfec8165d74c703f03145f"} Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.440284 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d6ths"] Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.440296 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2xlk" event={"ID":"10001c1a-643a-49dd-b5ba-88376896f044","Type":"ContainerStarted","Data":"a146d7ebbeb2f70c4d0ada534a15cdc2ecaf5c1baa76a98a96a1e169773f2d9a"} Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.440307 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g6gk"] Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.502495 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/919cb2e5-eacd-4b72-b8e7-e5e85e582f40-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"919cb2e5-eacd-4b72-b8e7-e5e85e582f40\") " 
pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.502626 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.502703 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/919cb2e5-eacd-4b72-b8e7-e5e85e582f40-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"919cb2e5-eacd-4b72-b8e7-e5e85e582f40\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.502844 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/919cb2e5-eacd-4b72-b8e7-e5e85e582f40-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"919cb2e5-eacd-4b72-b8e7-e5e85e582f40\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:11:48 crc kubenswrapper[5110]: E0126 00:11:48.504373 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.004347031 +0000 UTC m=+166.233245700 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.544155 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/919cb2e5-eacd-4b72-b8e7-e5e85e582f40-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"919cb2e5-eacd-4b72-b8e7-e5e85e582f40\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.603308 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-vl2cl" podStartSLOduration=26.603289282 podStartE2EDuration="26.603289282s" podCreationTimestamp="2026-01-26 00:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:48.602764217 +0000 UTC m=+165.831662846" watchObservedRunningTime="2026-01-26 00:11:48.603289282 +0000 UTC m=+165.832187891" Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.607042 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:48 crc kubenswrapper[5110]: E0126 00:11:48.607166 5110 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.107152453 +0000 UTC m=+166.336051062 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.607436 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:48 crc kubenswrapper[5110]: E0126 00:11:48.607926 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.107908625 +0000 UTC m=+166.336807224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.691665 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.709002 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:48 crc kubenswrapper[5110]: E0126 00:11:48.709370 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.209345868 +0000 UTC m=+166.438244477 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.810222 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:48 crc kubenswrapper[5110]: E0126 00:11:48.810597 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.310583285 +0000 UTC m=+166.539481894 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5110]: I0126 00:11:48.911503 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:48 crc kubenswrapper[5110]: E0126 00:11:48.912119 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.41209798 +0000 UTC m=+166.640996589 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5110]: I0126 00:11:49.013628 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:49 crc kubenswrapper[5110]: E0126 00:11:49.014382 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.514366637 +0000 UTC m=+166.743265246 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5110]: I0126 00:11:49.091082 5110 generic.go:358] "Generic (PLEG): container finished" podID="20cad200-6605-4fac-b28d-b84cd2d74a89" containerID="552ca27a832505b24105774c0c07e179c127848d2e354cb133fa690642f7f832" exitCode=0 Jan 26 00:11:49 crc kubenswrapper[5110]: I0126 00:11:49.092067 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2w74" event={"ID":"20cad200-6605-4fac-b28d-b84cd2d74a89","Type":"ContainerDied","Data":"552ca27a832505b24105774c0c07e179c127848d2e354cb133fa690642f7f832"} Jan 26 00:11:49 crc kubenswrapper[5110]: I0126 00:11:49.114375 5110 generic.go:358] "Generic (PLEG): container finished" podID="772c8d38-81bf-4f54-b995-e740d2056ead" containerID="4f3553de869f9bd061feae3488be2b91031388d07af5cc6effb71189aff4d0b1" exitCode=0 Jan 26 00:11:49 crc kubenswrapper[5110]: I0126 00:11:49.114742 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"772c8d38-81bf-4f54-b995-e740d2056ead","Type":"ContainerDied","Data":"4f3553de869f9bd061feae3488be2b91031388d07af5cc6effb71189aff4d0b1"} Jan 26 00:11:49 crc kubenswrapper[5110]: I0126 00:11:49.115587 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:49 crc kubenswrapper[5110]: E0126 00:11:49.116098 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.616048097 +0000 UTC m=+166.844946706 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5110]: I0126 00:11:49.224501 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:49 crc kubenswrapper[5110]: E0126 00:11:49.227574 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.727560041 +0000 UTC m=+166.956458650 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5110]: I0126 00:11:49.326511 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:49 crc kubenswrapper[5110]: E0126 00:11:49.326682 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.826652356 +0000 UTC m=+167.055550965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5110]: I0126 00:11:49.327382 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:49 crc kubenswrapper[5110]: E0126 00:11:49.327756 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.827744918 +0000 UTC m=+167.056643527 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5110]: I0126 00:11:49.492002 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:49 crc kubenswrapper[5110]: E0126 00:11:49.492164 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.992122301 +0000 UTC m=+167.221020910 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5110]: I0126 00:11:49.492669 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:49 crc kubenswrapper[5110]: E0126 00:11:49.493189 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.99315876 +0000 UTC m=+167.222057369 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5110]: I0126 00:11:49.697777 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:49 crc kubenswrapper[5110]: E0126 00:11:49.697908 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.19787972 +0000 UTC m=+167.426778329 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5110]: I0126 00:11:49.698181 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:49 crc kubenswrapper[5110]: E0126 00:11:49.698647 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.198639052 +0000 UTC m=+167.427537661 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5110]: I0126 00:11:49.799850 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:49 crc kubenswrapper[5110]: E0126 00:11:49.800160 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.300139226 +0000 UTC m=+167.529037835 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5110]: I0126 00:11:49.922051 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:49 crc kubenswrapper[5110]: E0126 00:11:49.923097 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.42307325 +0000 UTC m=+167.651971849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:50 crc kubenswrapper[5110]: I0126 00:11:50.024219 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:50 crc kubenswrapper[5110]: E0126 00:11:50.025219 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.525194722 +0000 UTC m=+167.754093331 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:50 crc kubenswrapper[5110]: I0126 00:11:50.237841 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:50 crc kubenswrapper[5110]: E0126 00:11:50.238623 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.738603543 +0000 UTC m=+167.967502152 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:50 crc kubenswrapper[5110]: I0126 00:11:50.258781 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6ths" event={"ID":"168e3c2a-eb5e-451a-bba9-93e41fb1e958","Type":"ContainerStarted","Data":"dd7529fbb5a18e9bc6a2aa71af30a8d697bf687529849e01d06feeac3220fda4"} Jan 26 00:11:50 crc kubenswrapper[5110]: I0126 00:11:50.270463 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7" event={"ID":"d38403be-a3f6-43a0-9957-39bc82a8870c","Type":"ContainerStarted","Data":"7db8f11014a8cca9f97ca060a0b931722b85e6cdcbd2bef4f0dd9ba566249c0f"} Jan 26 00:11:50 crc kubenswrapper[5110]: I0126 00:11:50.271941 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7" Jan 26 00:11:50 crc kubenswrapper[5110]: I0126 00:11:50.276859 5110 patch_prober.go:28] interesting pod/controller-manager-684d994d47-8c2v7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Jan 26 00:11:50 crc kubenswrapper[5110]: I0126 00:11:50.276954 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7" podUID="d38403be-a3f6-43a0-9957-39bc82a8870c" containerName="controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Jan 26 00:11:50 crc kubenswrapper[5110]: I0126 00:11:50.282472 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2xlk" event={"ID":"10001c1a-643a-49dd-b5ba-88376896f044","Type":"ContainerStarted","Data":"a78fa1842b1c8090455602511267f4ae213fc67d96d93cf3c41e2dc83010e27d"} Jan 26 00:11:50 crc kubenswrapper[5110]: I0126 00:11:50.339563 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:50 crc kubenswrapper[5110]: E0126 00:11:50.340880 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.840855479 +0000 UTC m=+168.069754088 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:50 crc kubenswrapper[5110]: I0126 00:11:50.441479 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:50 crc kubenswrapper[5110]: E0126 00:11:50.441991 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.941972133 +0000 UTC m=+168.170870742 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:50 crc kubenswrapper[5110]: I0126 00:11:50.453670 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7" podStartSLOduration=10.45364616 podStartE2EDuration="10.45364616s" podCreationTimestamp="2026-01-26 00:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:50.443878318 +0000 UTC m=+167.672776937" watchObservedRunningTime="2026-01-26 00:11:50.45364616 +0000 UTC m=+167.682544769" Jan 26 00:11:50 crc kubenswrapper[5110]: I0126 00:11:50.667229 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:50 crc kubenswrapper[5110]: E0126 00:11:50.668204 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.168174113 +0000 UTC m=+168.397072722 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:50 crc kubenswrapper[5110]: I0126 00:11:50.769239 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:50 crc kubenswrapper[5110]: E0126 00:11:50.770389 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.270370077 +0000 UTC m=+168.499268686 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:50 crc kubenswrapper[5110]: I0126 00:11:50.870787 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:50 crc kubenswrapper[5110]: E0126 00:11:50.871376 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.371351227 +0000 UTC m=+168.600249836 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.172270 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:51 crc kubenswrapper[5110]: E0126 00:11:51.172719 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.67270057 +0000 UTC m=+168.901599179 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.277312 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:51 crc kubenswrapper[5110]: E0126 00:11:51.277689 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.777669165 +0000 UTC m=+169.006567764 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.291935 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.312420 5110 generic.go:358] "Generic (PLEG): container finished" podID="96073c20-1f42-44fa-a922-6681dab132ef" containerID="60f756369d575eb8d001fb9559365610b3025359a9a19634d0d08ad74aa47c53" exitCode=0 Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.312578 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsblt" event={"ID":"96073c20-1f42-44fa-a922-6681dab132ef","Type":"ContainerDied","Data":"60f756369d575eb8d001fb9559365610b3025359a9a19634d0d08ad74aa47c53"} Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.323739 5110 generic.go:358] "Generic (PLEG): container finished" podID="24c8aa2b-e367-4f40-92d2-ff31d5cbce2d" containerID="a633ec49b1fb6a296a7bfb42cc57776595511abdca806c1afb00952be8ff873a" exitCode=0 Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.359282 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltvhd" event={"ID":"24c8aa2b-e367-4f40-92d2-ff31d5cbce2d","Type":"ContainerDied","Data":"a633ec49b1fb6a296a7bfb42cc57776595511abdca806c1afb00952be8ff873a"} Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.378064 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" 
event={"ID":"5bbb8424-18f0-4d1c-8f78-7a42e252cafb","Type":"ContainerStarted","Data":"f4a7fdbfd81526c18db00a68c0465b19bb2ffc7fb1d2ca10cac82ebd84a40e85"} Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.379322 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.381245 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:51 crc kubenswrapper[5110]: E0126 00:11:51.382107 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.882091604 +0000 UTC m=+169.110990203 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.401522 5110 generic.go:358] "Generic (PLEG): container finished" podID="063f39d3-4de8-4c26-99fc-3148a8738541" containerID="ffd77a85e65077cade19a22d9d8d77ab99c384e08f01758d32ef90ee0db56f01" exitCode=0 Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.401667 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g6gk" event={"ID":"063f39d3-4de8-4c26-99fc-3148a8738541","Type":"ContainerDied","Data":"ffd77a85e65077cade19a22d9d8d77ab99c384e08f01758d32ef90ee0db56f01"} Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.424761 5110 generic.go:358] "Generic (PLEG): container finished" podID="4f9f1edc-9805-4975-8e01-a3428d24cc00" containerID="4364a397d7c04cbd52fc6ca0232c219109b264ea769149e1b006c8c223917ff0" exitCode=0 Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.424978 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssztd" event={"ID":"4f9f1edc-9805-4975-8e01-a3428d24cc00","Type":"ContainerDied","Data":"4364a397d7c04cbd52fc6ca0232c219109b264ea769149e1b006c8c223917ff0"} Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.450726 5110 generic.go:358] "Generic (PLEG): container finished" podID="10001c1a-643a-49dd-b5ba-88376896f044" containerID="a78fa1842b1c8090455602511267f4ae213fc67d96d93cf3c41e2dc83010e27d" exitCode=0 Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.450829 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-k2xlk" event={"ID":"10001c1a-643a-49dd-b5ba-88376896f044","Type":"ContainerDied","Data":"a78fa1842b1c8090455602511267f4ae213fc67d96d93cf3c41e2dc83010e27d"} Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.453266 5110 generic.go:358] "Generic (PLEG): container finished" podID="4276f77a-a21c-47f2-9902-e37c3ab865f5" containerID="9636bde2ac0001cb1087d339beadf9f5cde94dcccb034a0ea012d3112d49aefc" exitCode=0 Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.453333 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dlcz" event={"ID":"4276f77a-a21c-47f2-9902-e37c3ab865f5","Type":"ContainerDied","Data":"9636bde2ac0001cb1087d339beadf9f5cde94dcccb034a0ea012d3112d49aefc"} Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.482010 5110 generic.go:358] "Generic (PLEG): container finished" podID="168e3c2a-eb5e-451a-bba9-93e41fb1e958" containerID="dd7529fbb5a18e9bc6a2aa71af30a8d697bf687529849e01d06feeac3220fda4" exitCode=0 Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.482497 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.482826 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6ths" event={"ID":"168e3c2a-eb5e-451a-bba9-93e41fb1e958","Type":"ContainerDied","Data":"dd7529fbb5a18e9bc6a2aa71af30a8d697bf687529849e01d06feeac3220fda4"} Jan 26 00:11:51 crc kubenswrapper[5110]: E0126 00:11:51.483281 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.983260969 +0000 UTC m=+169.212159578 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.514745 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" podStartSLOduration=10.514729289 podStartE2EDuration="10.514729289s" podCreationTimestamp="2026-01-26 00:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:51.512062402 +0000 UTC m=+168.740961011" watchObservedRunningTime="2026-01-26 00:11:51.514729289 +0000 UTC m=+168.743627898" Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.584729 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:51 crc kubenswrapper[5110]: E0126 00:11:51.585218 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 00:11:52.085194476 +0000 UTC m=+169.314093085 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.686939 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:51 crc kubenswrapper[5110]: E0126 00:11:51.688741 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.188710239 +0000 UTC m=+169.417608858 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.789180 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:51 crc kubenswrapper[5110]: E0126 00:11:51.789565 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.289551025 +0000 UTC m=+169.518449634 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.871267 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7" Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.890612 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:51 crc kubenswrapper[5110]: E0126 00:11:51.890862 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.390826093 +0000 UTC m=+169.619724702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.891059 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:51 crc kubenswrapper[5110]: E0126 00:11:51.891780 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.39177202 +0000 UTC m=+169.620670629 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5110]: I0126 00:11:51.954873 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" Jan 26 00:11:52 crc kubenswrapper[5110]: E0126 00:11:52.017861 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.517833775 +0000 UTC m=+169.746732384 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.017714 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.018539 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:52 crc kubenswrapper[5110]: E0126 00:11:52.019133 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.519121903 +0000 UTC m=+169.748020512 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.120043 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:52 crc kubenswrapper[5110]: E0126 00:11:52.120822 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.620767081 +0000 UTC m=+169.849665690 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.144931 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.258748 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/772c8d38-81bf-4f54-b995-e740d2056ead-kube-api-access\") pod \"772c8d38-81bf-4f54-b995-e740d2056ead\" (UID: \"772c8d38-81bf-4f54-b995-e740d2056ead\") " Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.258914 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/772c8d38-81bf-4f54-b995-e740d2056ead-kubelet-dir\") pod \"772c8d38-81bf-4f54-b995-e740d2056ead\" (UID: \"772c8d38-81bf-4f54-b995-e740d2056ead\") " Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.259411 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:52 crc kubenswrapper[5110]: E0126 00:11:52.259940 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.759920055 +0000 UTC m=+169.988818664 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.260150 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/772c8d38-81bf-4f54-b995-e740d2056ead-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "772c8d38-81bf-4f54-b995-e740d2056ead" (UID: "772c8d38-81bf-4f54-b995-e740d2056ead"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.366754 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.367225 5110 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/772c8d38-81bf-4f54-b995-e740d2056ead-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:52 crc kubenswrapper[5110]: E0126 00:11:52.367331 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.867302749 +0000 UTC m=+170.096201358 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.487823 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:52 crc kubenswrapper[5110]: E0126 00:11:52.488343 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.988322228 +0000 UTC m=+170.217220837 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.613280 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:52 crc kubenswrapper[5110]: E0126 00:11:52.623510 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.123470076 +0000 UTC m=+170.352368685 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.778484 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:52 crc kubenswrapper[5110]: E0126 00:11:52.779104 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.279089135 +0000 UTC m=+170.507987744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.785435 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"919cb2e5-eacd-4b72-b8e7-e5e85e582f40","Type":"ContainerStarted","Data":"eb6f760e886a7c9b9b3506f1879a23123a056a1431ab3a9e4e6d7e59bf6856d7"} Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.788401 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.788490 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"772c8d38-81bf-4f54-b995-e740d2056ead","Type":"ContainerDied","Data":"edadc16f949d30cd86fdfb92298a7037e3a4562719832fdb69df8f391344d5aa"} Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.788514 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edadc16f949d30cd86fdfb92298a7037e3a4562719832fdb69df8f391344d5aa" Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.918562 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/772c8d38-81bf-4f54-b995-e740d2056ead-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "772c8d38-81bf-4f54-b995-e740d2056ead" (UID: "772c8d38-81bf-4f54-b995-e740d2056ead"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.938615 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:52 crc kubenswrapper[5110]: E0126 00:11:52.938732 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.43870788 +0000 UTC m=+170.667606489 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.940141 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.940341 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/772c8d38-81bf-4f54-b995-e740d2056ead-kube-api-access\") on node 
\"crc\" DevicePath \"\"" Jan 26 00:11:52 crc kubenswrapper[5110]: E0126 00:11:52.941513 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.441503271 +0000 UTC m=+170.670401880 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5110]: I0126 00:11:52.995659 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" Jan 26 00:11:53 crc kubenswrapper[5110]: I0126 00:11:53.122045 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:53 crc kubenswrapper[5110]: E0126 00:11:53.122860 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.622818834 +0000 UTC m=+170.851717493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5110]: I0126 00:11:53.223779 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:53 crc kubenswrapper[5110]: E0126 00:11:53.224188 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.724172304 +0000 UTC m=+170.953070913 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5110]: I0126 00:11:53.239720 5110 patch_prober.go:28] interesting pod/console-64d44f6ddf-tjq6l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 26 00:11:53 crc kubenswrapper[5110]: I0126 00:11:53.239766 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-tjq6l" podUID="766da705-879b-4d57-a34d-43eca4c9da19" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 26 00:11:53 crc kubenswrapper[5110]: I0126 00:11:53.325139 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:53 crc kubenswrapper[5110]: E0126 00:11:53.325335 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.825301628 +0000 UTC m=+171.054200237 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5110]: I0126 00:11:53.325406 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:53 crc kubenswrapper[5110]: E0126 00:11:53.325807 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.825790062 +0000 UTC m=+171.054688671 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5110]: I0126 00:11:53.426381 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:53 crc kubenswrapper[5110]: E0126 00:11:53.426694 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.926664749 +0000 UTC m=+171.155563368 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5110]: I0126 00:11:53.531933 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:53 crc kubenswrapper[5110]: E0126 00:11:53.532401 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.032383674 +0000 UTC m=+171.261282283 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5110]: I0126 00:11:53.633465 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:53 crc kubenswrapper[5110]: E0126 00:11:53.634319 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.134282681 +0000 UTC m=+171.363181290 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5110]: I0126 00:11:53.735715 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:53 crc kubenswrapper[5110]: E0126 00:11:53.736191 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.236134525 +0000 UTC m=+171.465033134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5110]: I0126 00:11:53.838253 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:53 crc kubenswrapper[5110]: E0126 00:11:53.841957 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.341910194 +0000 UTC m=+171.570808823 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5110]: I0126 00:11:53.843088 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:53 crc kubenswrapper[5110]: E0126 00:11:53.846058 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.346036883 +0000 UTC m=+171.574935492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5110]: I0126 00:11:53.944732 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:53 crc kubenswrapper[5110]: E0126 00:11:53.945277 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.445245771 +0000 UTC m=+171.674144370 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:54 crc kubenswrapper[5110]: I0126 00:11:54.070229 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:54 crc kubenswrapper[5110]: E0126 00:11:54.070818 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.570776481 +0000 UTC m=+171.799675090 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:54 crc kubenswrapper[5110]: I0126 00:11:54.172510 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:54 crc kubenswrapper[5110]: E0126 00:11:54.172824 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.67278304 +0000 UTC m=+171.901681649 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:54 crc kubenswrapper[5110]: E0126 00:11:54.265940 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:11:54 crc kubenswrapper[5110]: E0126 00:11:54.267686 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:11:54 crc kubenswrapper[5110]: E0126 00:11:54.269081 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:11:54 crc kubenswrapper[5110]: E0126 00:11:54.269130 5110 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" podUID="8c8a6437-b0cb-4825-999f-6e523fd394e9" 
containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 26 00:11:54 crc kubenswrapper[5110]: I0126 00:11:54.305352 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:54 crc kubenswrapper[5110]: E0126 00:11:54.305751 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.805737594 +0000 UTC m=+172.034636203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:54 crc kubenswrapper[5110]: I0126 00:11:54.407068 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:54 crc kubenswrapper[5110]: E0126 00:11:54.408395 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.908366842 +0000 UTC m=+172.137265451 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:54 crc kubenswrapper[5110]: I0126 00:11:54.520093 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:54 crc kubenswrapper[5110]: E0126 00:11:54.520541 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.020523564 +0000 UTC m=+172.249422173 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:54 crc kubenswrapper[5110]: I0126 00:11:54.636161 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:54 crc kubenswrapper[5110]: E0126 00:11:54.636459 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.136414835 +0000 UTC m=+172.365313444 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:54 crc kubenswrapper[5110]: I0126 00:11:54.768202 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:11:54 crc kubenswrapper[5110]: E0126 00:11:54.768765 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.268743251 +0000 UTC m=+172.497641880 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:54 crc kubenswrapper[5110]: I0126 00:11:54.871759 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:54 crc kubenswrapper[5110]: E0126 00:11:54.872062 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.372042058 +0000 UTC m=+172.600940667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:54 crc kubenswrapper[5110]: I0126 00:11:54.889922 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzlvv" event={"ID":"4464550b-e8e5-4f07-8e54-40eb792eb201","Type":"ContainerStarted","Data":"f0a359535aa92c06321925270774a5b99d7974d23e95fbb09b0d1f7811aff1d0"}
Jan 26 00:11:54 crc kubenswrapper[5110]: I0126 00:11:54.900244 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"919cb2e5-eacd-4b72-b8e7-e5e85e582f40","Type":"ContainerStarted","Data":"a121f3abfc28d2b4ec3675848a844873177b927b3b84ba8bddcb760aa1812e44"}
Jan 26 00:11:54 crc kubenswrapper[5110]: I0126 00:11:54.965296 5110 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 26 00:11:54 crc kubenswrapper[5110]: I0126 00:11:54.988260 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:54 crc kubenswrapper[5110]: E0126 00:11:54.988659 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.488645459 +0000 UTC m=+172.717544068 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.052529 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=9.052512496 podStartE2EDuration="9.052512496s" podCreationTimestamp="2026-01-26 00:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:55.050299682 +0000 UTC m=+172.279198301" watchObservedRunningTime="2026-01-26 00:11:55.052512496 +0000 UTC m=+172.281411105"
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.089659 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:55 crc kubenswrapper[5110]: E0126 00:11:55.089785 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.589762513 +0000 UTC m=+172.818661122 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.089861 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:55 crc kubenswrapper[5110]: E0126 00:11:55.090211 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.590203576 +0000 UTC m=+172.819102185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.173788 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-vl2cl"
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.191738 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:55 crc kubenswrapper[5110]: E0126 00:11:55.192108 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.692089331 +0000 UTC m=+172.920987940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.293887 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:55 crc kubenswrapper[5110]: E0126 00:11:55.294731 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.794711929 +0000 UTC m=+173.023610538 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.395215 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:55 crc kubenswrapper[5110]: E0126 00:11:55.395508 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.895487452 +0000 UTC m=+173.124386061 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.508211 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:55 crc kubenswrapper[5110]: E0126 00:11:55.508649 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.008630153 +0000 UTC m=+173.237528752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.654114 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:55 crc kubenswrapper[5110]: E0126 00:11:55.654378 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.154327976 +0000 UTC m=+173.383226625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.654925 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:55 crc kubenswrapper[5110]: E0126 00:11:55.655529 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.15550296 +0000 UTC m=+173.384401579 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-brrzw" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.774191 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:55 crc kubenswrapper[5110]: E0126 00:11:55.774560 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.274527631 +0000 UTC m=+173.503426240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.788281 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.788774 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.788852 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-77449"
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.789400 5110 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"220cf5974f7df703eac4064ef3d7f6ca6b6beacc0853208814acf86c2f43157b"} pod="openshift-console/downloads-747b44746d-77449" containerMessage="Container download-server failed liveness probe, will be restarted"
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.789455 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" containerID="cri-o://220cf5974f7df703eac4064ef3d7f6ca6b6beacc0853208814acf86c2f43157b" gracePeriod=2
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.789698 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.789776 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.792900 5110 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T00:11:54.965334285Z","UUID":"a9e12041-33b4-44a2-9e92-19a1022fd099","Handler":null,"Name":"","Endpoint":""}
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.832976 5110 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.833043 5110 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.875467 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.919468 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzlvv" event={"ID":"4464550b-e8e5-4f07-8e54-40eb792eb201","Type":"ContainerStarted","Data":"bd84db64201e70d38898970c3d1d621040df60669de81a008f29a7bcc57fcca7"}
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.954964 5110 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 00:11:55 crc kubenswrapper[5110]: I0126 00:11:55.955034 5110 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:56 crc kubenswrapper[5110]: I0126 00:11:56.341216 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-brrzw\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:56 crc kubenswrapper[5110]: I0126 00:11:56.351282 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:56 crc kubenswrapper[5110]: I0126 00:11:56.396094 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:56 crc kubenswrapper[5110]: I0126 00:11:56.403388 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
Jan 26 00:11:56 crc kubenswrapper[5110]: I0126 00:11:56.712524 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-brrzw"]
Jan 26 00:11:56 crc kubenswrapper[5110]: W0126 00:11:56.738074 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1b4fb8f_576d_4a52_90c3_1b6db6a4170b.slice/crio-a0a12521b814d07449d491762980d977e1fb1c84d0a0db9c58f5a8bb6c2bed82 WatchSource:0}: Error finding container a0a12521b814d07449d491762980d977e1fb1c84d0a0db9c58f5a8bb6c2bed82: Status 404 returned error can't find the container with id a0a12521b814d07449d491762980d977e1fb1c84d0a0db9c58f5a8bb6c2bed82
Jan 26 00:11:56 crc kubenswrapper[5110]: I0126 00:11:56.927752 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-brrzw" event={"ID":"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b","Type":"ContainerStarted","Data":"a0a12521b814d07449d491762980d977e1fb1c84d0a0db9c58f5a8bb6c2bed82"}
Jan 26 00:11:56 crc kubenswrapper[5110]: I0126 00:11:56.931421 5110 generic.go:358] "Generic (PLEG): container finished" podID="2438617c-3b33-4b33-971c-afaab481cfe6" containerID="220cf5974f7df703eac4064ef3d7f6ca6b6beacc0853208814acf86c2f43157b" exitCode=0
Jan 26 00:11:56 crc kubenswrapper[5110]: I0126 00:11:56.931518 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-77449" event={"ID":"2438617c-3b33-4b33-971c-afaab481cfe6","Type":"ContainerDied","Data":"220cf5974f7df703eac4064ef3d7f6ca6b6beacc0853208814acf86c2f43157b"}
Jan 26 00:11:56 crc kubenswrapper[5110]: I0126 00:11:56.931591 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-77449" event={"ID":"2438617c-3b33-4b33-971c-afaab481cfe6","Type":"ContainerStarted","Data":"6ac60b3c9e457b25b3cdbfcc8e1cbdcc237153bf4a32da5bdfb725522f908b5d"}
Jan 26 00:11:56 crc kubenswrapper[5110]: I0126 00:11:56.932082 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-77449"
Jan 26 00:11:56 crc kubenswrapper[5110]: I0126 00:11:56.932737 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 26 00:11:56 crc kubenswrapper[5110]: I0126 00:11:56.932875 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 26 00:11:57 crc kubenswrapper[5110]: I0126 00:11:57.536404 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes"
Jan 26 00:11:57 crc kubenswrapper[5110]: I0126 00:11:57.602479 5110 ???:1] "http: TLS handshake error from 192.168.126.11:39644: no serving certificate available for the kubelet"
Jan 26 00:11:57 crc kubenswrapper[5110]: I0126 00:11:57.974532 5110 generic.go:358] "Generic (PLEG): container finished" podID="919cb2e5-eacd-4b72-b8e7-e5e85e582f40" containerID="a121f3abfc28d2b4ec3675848a844873177b927b3b84ba8bddcb760aa1812e44" exitCode=0
Jan 26 00:11:57 crc kubenswrapper[5110]: I0126 00:11:57.975566 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"919cb2e5-eacd-4b72-b8e7-e5e85e582f40","Type":"ContainerDied","Data":"a121f3abfc28d2b4ec3675848a844873177b927b3b84ba8bddcb760aa1812e44"}
Jan 26 00:11:57 crc kubenswrapper[5110]: I0126 00:11:57.989121 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzlvv" event={"ID":"4464550b-e8e5-4f07-8e54-40eb792eb201","Type":"ContainerStarted","Data":"6f4f26ee7a45fb50bfd5038ee62c2fb0dee9fec8d7c3734dbdd66e8cbadec94a"}
Jan 26 00:11:57 crc kubenswrapper[5110]: I0126 00:11:57.999506 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-brrzw" event={"ID":"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b","Type":"ContainerStarted","Data":"e72785daf5583e885437fbffb06256814d325302c070a880645c97270d88c27d"}
Jan 26 00:11:57 crc kubenswrapper[5110]: I0126 00:11:57.999824 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:11:58 crc kubenswrapper[5110]: I0126 00:11:58.000548 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 26 00:11:58 crc kubenswrapper[5110]: I0126 00:11:58.000698 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 26 00:11:58 crc kubenswrapper[5110]: I0126 00:11:58.033228 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-pzlvv" podStartSLOduration=36.033206415 podStartE2EDuration="36.033206415s" podCreationTimestamp="2026-01-26 00:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:58.024573285 +0000 UTC m=+175.253471894" watchObservedRunningTime="2026-01-26 00:11:58.033206415 +0000 UTC m=+175.262105014"
Jan 26 00:11:58 crc kubenswrapper[5110]: I0126 00:11:58.055459 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-brrzw" podStartSLOduration=155.055434768 podStartE2EDuration="2m35.055434768s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:58.053661606 +0000 UTC m=+175.282560245" watchObservedRunningTime="2026-01-26 00:11:58.055434768 +0000 UTC m=+175.284333377"
Jan 26 00:12:03 crc kubenswrapper[5110]: I0126 00:12:03.309157 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-tjq6l"
Jan 26 00:12:03 crc kubenswrapper[5110]: I0126 00:12:03.316262 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-tjq6l"
Jan 26 00:12:04 crc kubenswrapper[5110]: E0126 00:12:04.266025 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 26 00:12:04 crc kubenswrapper[5110]: E0126 00:12:04.267729 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 26 00:12:04 crc kubenswrapper[5110]: E0126 00:12:04.281664 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 26 00:12:04 crc kubenswrapper[5110]: E0126 00:12:04.281773 5110 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" podUID="8c8a6437-b0cb-4825-999f-6e523fd394e9" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 26 00:12:05 crc kubenswrapper[5110]: I0126 00:12:05.789178 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 26 00:12:05 crc kubenswrapper[5110]: I0126 00:12:05.789349 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 26 00:12:06 crc kubenswrapper[5110]: I0126 00:12:06.342243 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"919cb2e5-eacd-4b72-b8e7-e5e85e582f40","Type":"ContainerDied","Data":"eb6f760e886a7c9b9b3506f1879a23123a056a1431ab3a9e4e6d7e59bf6856d7"}
Jan 26 00:12:06 crc kubenswrapper[5110]: I0126 00:12:06.342305 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb6f760e886a7c9b9b3506f1879a23123a056a1431ab3a9e4e6d7e59bf6856d7"
Jan 26 00:12:06 crc kubenswrapper[5110]: I0126 00:12:06.344178 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 26 00:12:06 crc kubenswrapper[5110]: I0126 00:12:06.455166 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/919cb2e5-eacd-4b72-b8e7-e5e85e582f40-kubelet-dir\") pod \"919cb2e5-eacd-4b72-b8e7-e5e85e582f40\" (UID: \"919cb2e5-eacd-4b72-b8e7-e5e85e582f40\") "
Jan 26 00:12:06 crc kubenswrapper[5110]: I0126 00:12:06.455679 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/919cb2e5-eacd-4b72-b8e7-e5e85e582f40-kube-api-access\") pod \"919cb2e5-eacd-4b72-b8e7-e5e85e582f40\" (UID: \"919cb2e5-eacd-4b72-b8e7-e5e85e582f40\") "
Jan 26 00:12:06 crc kubenswrapper[5110]: I0126 00:12:06.455334 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/919cb2e5-eacd-4b72-b8e7-e5e85e582f40-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "919cb2e5-eacd-4b72-b8e7-e5e85e582f40" (UID: "919cb2e5-eacd-4b72-b8e7-e5e85e582f40"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 26 00:12:06 crc kubenswrapper[5110]: I0126 00:12:06.557015 5110 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/919cb2e5-eacd-4b72-b8e7-e5e85e582f40-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:06 crc kubenswrapper[5110]: I0126 00:12:06.571751 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/919cb2e5-eacd-4b72-b8e7-e5e85e582f40-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "919cb2e5-eacd-4b72-b8e7-e5e85e582f40" (UID: "919cb2e5-eacd-4b72-b8e7-e5e85e582f40"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:12:06 crc kubenswrapper[5110]: I0126 00:12:06.657908 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/919cb2e5-eacd-4b72-b8e7-e5e85e582f40-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:07 crc kubenswrapper[5110]: I0126 00:12:07.355230 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 26 00:12:08 crc kubenswrapper[5110]: I0126 00:12:08.030633 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 26 00:12:08 crc kubenswrapper[5110]: I0126 00:12:08.031189 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 26 00:12:08 crc kubenswrapper[5110]: I0126 00:12:08.364105 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-hvgtl_8c8a6437-b0cb-4825-999f-6e523fd394e9/kube-multus-additional-cni-plugins/0.log"
Jan 26 00:12:08 crc kubenswrapper[5110]: I0126 00:12:08.364249 5110 generic.go:358] "Generic (PLEG): container finished" podID="8c8a6437-b0cb-4825-999f-6e523fd394e9" containerID="b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a" exitCode=137
Jan 26 00:12:08 crc kubenswrapper[5110]: I0126 00:12:08.364593 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" event={"ID":"8c8a6437-b0cb-4825-999f-6e523fd394e9","Type":"ContainerDied","Data":"b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a"}
Jan 26 00:12:10 crc kubenswrapper[5110]: I0126 00:12:10.515451 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-h4rkf"
Jan 26 00:12:14 crc kubenswrapper[5110]: E0126 00:12:14.264988 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a is running failed: container process not found" containerID="b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 26 00:12:14 crc kubenswrapper[5110]: E0126 00:12:14.265910 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a is running failed: container process not found" containerID="b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 26 00:12:14 crc kubenswrapper[5110]: E0126 00:12:14.266415 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a is running failed: container process not found" containerID="b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 26 00:12:14 crc kubenswrapper[5110]: E0126 00:12:14.266484 5110 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" podUID="8c8a6437-b0cb-4825-999f-6e523fd394e9" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 26 00:12:14 crc kubenswrapper[5110]: I0126 00:12:14.534541 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 26 00:12:15 crc kubenswrapper[5110]: I0126 00:12:15.790829 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 26 00:12:15 crc kubenswrapper[5110]: I0126 00:12:15.791367 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 26 00:12:17 crc kubenswrapper[5110]: I0126 00:12:17.995671 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 26 00:12:17 crc kubenswrapper[5110]: I0126 00:12:17.995762 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 26 00:12:18 crc kubenswrapper[5110]: I0126 00:12:18.118170 5110 ???:1] "http: TLS handshake error from 192.168.126.11:35326: no serving certificate available for the kubelet"
Jan 26 00:12:19 crc kubenswrapper[5110]: I0126 00:12:19.024176 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-brrzw"
Jan 26 00:12:24 crc kubenswrapper[5110]: E0126 00:12:24.263845 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a is running failed: container process not found" containerID="b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 26 00:12:24 crc kubenswrapper[5110]: E0126 00:12:24.265182 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a is running failed: container process not found" containerID="b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 26 00:12:24 crc kubenswrapper[5110]: E0126 00:12:24.266009 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a is running failed: container process not found" containerID="b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 26 00:12:24 crc kubenswrapper[5110]: E0126 00:12:24.266079 5110 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl"
podUID="8c8a6437-b0cb-4825-999f-6e523fd394e9" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 26 00:12:25 crc kubenswrapper[5110]: I0126 00:12:25.082689 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 26 00:12:25 crc kubenswrapper[5110]: I0126 00:12:25.084074 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="772c8d38-81bf-4f54-b995-e740d2056ead" containerName="pruner" Jan 26 00:12:25 crc kubenswrapper[5110]: I0126 00:12:25.084096 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="772c8d38-81bf-4f54-b995-e740d2056ead" containerName="pruner" Jan 26 00:12:25 crc kubenswrapper[5110]: I0126 00:12:25.084108 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="919cb2e5-eacd-4b72-b8e7-e5e85e582f40" containerName="pruner" Jan 26 00:12:25 crc kubenswrapper[5110]: I0126 00:12:25.084116 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="919cb2e5-eacd-4b72-b8e7-e5e85e582f40" containerName="pruner" Jan 26 00:12:25 crc kubenswrapper[5110]: I0126 00:12:25.084255 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="772c8d38-81bf-4f54-b995-e740d2056ead" containerName="pruner" Jan 26 00:12:25 crc kubenswrapper[5110]: I0126 00:12:25.084268 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="919cb2e5-eacd-4b72-b8e7-e5e85e582f40" containerName="pruner" Jan 26 00:12:25 crc kubenswrapper[5110]: I0126 00:12:25.787780 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 26 00:12:25 crc kubenswrapper[5110]: I0126 00:12:25.787910 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-77449" 
podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 26 00:12:25 crc kubenswrapper[5110]: I0126 00:12:25.986702 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:25 crc kubenswrapper[5110]: I0126 00:12:25.991520 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 26 00:12:25 crc kubenswrapper[5110]: I0126 00:12:25.992199 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:12:26 crc kubenswrapper[5110]: I0126 00:12:26.013473 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-77449" Jan 26 00:12:26 crc kubenswrapper[5110]: I0126 00:12:26.013508 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 26 00:12:26 crc kubenswrapper[5110]: I0126 00:12:26.014207 5110 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"6ac60b3c9e457b25b3cdbfcc8e1cbdcc237153bf4a32da5bdfb725522f908b5d"} pod="openshift-console/downloads-747b44746d-77449" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 26 00:12:26 crc kubenswrapper[5110]: I0126 00:12:26.014281 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" containerID="cri-o://6ac60b3c9e457b25b3cdbfcc8e1cbdcc237153bf4a32da5bdfb725522f908b5d" gracePeriod=2 Jan 26 00:12:26 crc kubenswrapper[5110]: I0126 00:12:26.014418 5110 
patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 26 00:12:26 crc kubenswrapper[5110]: I0126 00:12:26.014469 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 26 00:12:26 crc kubenswrapper[5110]: I0126 00:12:26.083667 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03706aeb-7193-414e-b54e-e860c08dd10f-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"03706aeb-7193-414e-b54e-e860c08dd10f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:26 crc kubenswrapper[5110]: I0126 00:12:26.084222 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03706aeb-7193-414e-b54e-e860c08dd10f-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"03706aeb-7193-414e-b54e-e860c08dd10f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:26 crc kubenswrapper[5110]: I0126 00:12:26.185894 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03706aeb-7193-414e-b54e-e860c08dd10f-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"03706aeb-7193-414e-b54e-e860c08dd10f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:26 crc kubenswrapper[5110]: I0126 00:12:26.185989 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/03706aeb-7193-414e-b54e-e860c08dd10f-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"03706aeb-7193-414e-b54e-e860c08dd10f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:26 crc kubenswrapper[5110]: I0126 00:12:26.186433 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03706aeb-7193-414e-b54e-e860c08dd10f-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"03706aeb-7193-414e-b54e-e860c08dd10f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:26 crc kubenswrapper[5110]: I0126 00:12:26.210735 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03706aeb-7193-414e-b54e-e860c08dd10f-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"03706aeb-7193-414e-b54e-e860c08dd10f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:26 crc kubenswrapper[5110]: I0126 00:12:26.317181 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:27 crc kubenswrapper[5110]: I0126 00:12:27.580627 5110 generic.go:358] "Generic (PLEG): container finished" podID="2438617c-3b33-4b33-971c-afaab481cfe6" containerID="6ac60b3c9e457b25b3cdbfcc8e1cbdcc237153bf4a32da5bdfb725522f908b5d" exitCode=0 Jan 26 00:12:27 crc kubenswrapper[5110]: I0126 00:12:27.580689 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-77449" event={"ID":"2438617c-3b33-4b33-971c-afaab481cfe6","Type":"ContainerDied","Data":"6ac60b3c9e457b25b3cdbfcc8e1cbdcc237153bf4a32da5bdfb725522f908b5d"} Jan 26 00:12:27 crc kubenswrapper[5110]: I0126 00:12:27.581095 5110 scope.go:117] "RemoveContainer" containerID="220cf5974f7df703eac4064ef3d7f6ca6b6beacc0853208814acf86c2f43157b" Jan 26 00:12:30 crc kubenswrapper[5110]: I0126 00:12:30.679322 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 26 00:12:30 crc kubenswrapper[5110]: I0126 00:12:30.749861 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 26 00:12:30 crc kubenswrapper[5110]: I0126 00:12:30.750105 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:30 crc kubenswrapper[5110]: I0126 00:12:30.883386 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1dd88ef-db06-41bc-8be3-53730c6fa57f-var-lock\") pod \"installer-12-crc\" (UID: \"c1dd88ef-db06-41bc-8be3-53730c6fa57f\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:30 crc kubenswrapper[5110]: I0126 00:12:30.883474 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1dd88ef-db06-41bc-8be3-53730c6fa57f-kubelet-dir\") pod \"installer-12-crc\" (UID: \"c1dd88ef-db06-41bc-8be3-53730c6fa57f\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:30 crc kubenswrapper[5110]: I0126 00:12:30.883652 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1dd88ef-db06-41bc-8be3-53730c6fa57f-kube-api-access\") pod \"installer-12-crc\" (UID: \"c1dd88ef-db06-41bc-8be3-53730c6fa57f\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:30 crc kubenswrapper[5110]: I0126 00:12:30.985405 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1dd88ef-db06-41bc-8be3-53730c6fa57f-kubelet-dir\") pod \"installer-12-crc\" (UID: \"c1dd88ef-db06-41bc-8be3-53730c6fa57f\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:30 crc kubenswrapper[5110]: I0126 00:12:30.985528 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1dd88ef-db06-41bc-8be3-53730c6fa57f-kubelet-dir\") pod \"installer-12-crc\" (UID: \"c1dd88ef-db06-41bc-8be3-53730c6fa57f\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 
26 00:12:30 crc kubenswrapper[5110]: I0126 00:12:30.985776 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1dd88ef-db06-41bc-8be3-53730c6fa57f-kube-api-access\") pod \"installer-12-crc\" (UID: \"c1dd88ef-db06-41bc-8be3-53730c6fa57f\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:30 crc kubenswrapper[5110]: I0126 00:12:30.985871 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1dd88ef-db06-41bc-8be3-53730c6fa57f-var-lock\") pod \"installer-12-crc\" (UID: \"c1dd88ef-db06-41bc-8be3-53730c6fa57f\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:30 crc kubenswrapper[5110]: I0126 00:12:30.985974 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1dd88ef-db06-41bc-8be3-53730c6fa57f-var-lock\") pod \"installer-12-crc\" (UID: \"c1dd88ef-db06-41bc-8be3-53730c6fa57f\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:31 crc kubenswrapper[5110]: I0126 00:12:31.015885 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1dd88ef-db06-41bc-8be3-53730c6fa57f-kube-api-access\") pod \"installer-12-crc\" (UID: \"c1dd88ef-db06-41bc-8be3-53730c6fa57f\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:31 crc kubenswrapper[5110]: I0126 00:12:31.084718 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:34 crc kubenswrapper[5110]: E0126 00:12:34.263636 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a is running failed: container process not found" containerID="b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:34 crc kubenswrapper[5110]: E0126 00:12:34.264730 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a is running failed: container process not found" containerID="b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:34 crc kubenswrapper[5110]: E0126 00:12:34.265027 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a is running failed: container process not found" containerID="b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:34 crc kubenswrapper[5110]: E0126 00:12:34.265054 5110 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" podUID="8c8a6437-b0cb-4825-999f-6e523fd394e9" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 26 00:12:36 crc kubenswrapper[5110]: I0126 
00:12:36.014988 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 26 00:12:36 crc kubenswrapper[5110]: I0126 00:12:36.015463 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 26 00:12:38 crc kubenswrapper[5110]: I0126 00:12:38.305065 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-hvgtl_8c8a6437-b0cb-4825-999f-6e523fd394e9/kube-multus-additional-cni-plugins/0.log" Jan 26 00:12:38 crc kubenswrapper[5110]: I0126 00:12:38.305745 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" Jan 26 00:12:38 crc kubenswrapper[5110]: I0126 00:12:38.430033 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8c8a6437-b0cb-4825-999f-6e523fd394e9-ready\") pod \"8c8a6437-b0cb-4825-999f-6e523fd394e9\" (UID: \"8c8a6437-b0cb-4825-999f-6e523fd394e9\") " Jan 26 00:12:38 crc kubenswrapper[5110]: I0126 00:12:38.430293 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6kl9\" (UniqueName: \"kubernetes.io/projected/8c8a6437-b0cb-4825-999f-6e523fd394e9-kube-api-access-z6kl9\") pod \"8c8a6437-b0cb-4825-999f-6e523fd394e9\" (UID: \"8c8a6437-b0cb-4825-999f-6e523fd394e9\") " Jan 26 00:12:38 crc kubenswrapper[5110]: I0126 00:12:38.430338 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8c8a6437-b0cb-4825-999f-6e523fd394e9-cni-sysctl-allowlist\") pod \"8c8a6437-b0cb-4825-999f-6e523fd394e9\" (UID: \"8c8a6437-b0cb-4825-999f-6e523fd394e9\") " Jan 26 00:12:38 crc kubenswrapper[5110]: I0126 00:12:38.430357 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8c8a6437-b0cb-4825-999f-6e523fd394e9-tuning-conf-dir\") pod \"8c8a6437-b0cb-4825-999f-6e523fd394e9\" (UID: \"8c8a6437-b0cb-4825-999f-6e523fd394e9\") " Jan 26 00:12:38 crc kubenswrapper[5110]: I0126 00:12:38.430579 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c8a6437-b0cb-4825-999f-6e523fd394e9-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "8c8a6437-b0cb-4825-999f-6e523fd394e9" (UID: "8c8a6437-b0cb-4825-999f-6e523fd394e9"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:12:38 crc kubenswrapper[5110]: I0126 00:12:38.430701 5110 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8c8a6437-b0cb-4825-999f-6e523fd394e9-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:38 crc kubenswrapper[5110]: I0126 00:12:38.431332 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c8a6437-b0cb-4825-999f-6e523fd394e9-ready" (OuterVolumeSpecName: "ready") pod "8c8a6437-b0cb-4825-999f-6e523fd394e9" (UID: "8c8a6437-b0cb-4825-999f-6e523fd394e9"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:12:38 crc kubenswrapper[5110]: I0126 00:12:38.431674 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8a6437-b0cb-4825-999f-6e523fd394e9-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "8c8a6437-b0cb-4825-999f-6e523fd394e9" (UID: "8c8a6437-b0cb-4825-999f-6e523fd394e9"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:38 crc kubenswrapper[5110]: I0126 00:12:38.443574 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c8a6437-b0cb-4825-999f-6e523fd394e9-kube-api-access-z6kl9" (OuterVolumeSpecName: "kube-api-access-z6kl9") pod "8c8a6437-b0cb-4825-999f-6e523fd394e9" (UID: "8c8a6437-b0cb-4825-999f-6e523fd394e9"). InnerVolumeSpecName "kube-api-access-z6kl9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:12:38 crc kubenswrapper[5110]: I0126 00:12:38.532236 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z6kl9\" (UniqueName: \"kubernetes.io/projected/8c8a6437-b0cb-4825-999f-6e523fd394e9-kube-api-access-z6kl9\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:38 crc kubenswrapper[5110]: I0126 00:12:38.532287 5110 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8c8a6437-b0cb-4825-999f-6e523fd394e9-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:38 crc kubenswrapper[5110]: I0126 00:12:38.532300 5110 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8c8a6437-b0cb-4825-999f-6e523fd394e9-ready\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:38 crc kubenswrapper[5110]: I0126 00:12:38.710074 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-hvgtl_8c8a6437-b0cb-4825-999f-6e523fd394e9/kube-multus-additional-cni-plugins/0.log" Jan 26 00:12:38 crc kubenswrapper[5110]: I0126 00:12:38.710833 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" event={"ID":"8c8a6437-b0cb-4825-999f-6e523fd394e9","Type":"ContainerDied","Data":"312daff4e7a4a9436a5b5a95add3d9d9836fad29a40674854b5016b9d6432c8d"} Jan 26 00:12:38 crc kubenswrapper[5110]: I0126 00:12:38.710903 5110 scope.go:117] "RemoveContainer" containerID="b142be6234ffd8caeb6021967a30cb598049539e85a0c7451710516ae5ff031a" Jan 26 00:12:38 crc kubenswrapper[5110]: I0126 00:12:38.711092 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hvgtl" Jan 26 00:12:38 crc kubenswrapper[5110]: I0126 00:12:38.758041 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hvgtl"] Jan 26 00:12:38 crc kubenswrapper[5110]: I0126 00:12:38.761979 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hvgtl"] Jan 26 00:12:39 crc kubenswrapper[5110]: I0126 00:12:39.040305 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 26 00:12:39 crc kubenswrapper[5110]: W0126 00:12:39.069093 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod03706aeb_7193_414e_b54e_e860c08dd10f.slice/crio-e78430b01c3be9daf617466e278dcecc50fba93496091edec460922b7a7b8c30 WatchSource:0}: Error finding container e78430b01c3be9daf617466e278dcecc50fba93496091edec460922b7a7b8c30: Status 404 returned error can't find the container with id e78430b01c3be9daf617466e278dcecc50fba93496091edec460922b7a7b8c30 Jan 26 00:12:39 crc kubenswrapper[5110]: I0126 00:12:39.147323 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 26 00:12:39 crc kubenswrapper[5110]: I0126 00:12:39.345486 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c8a6437-b0cb-4825-999f-6e523fd394e9" path="/var/lib/kubelet/pods/8c8a6437-b0cb-4825-999f-6e523fd394e9/volumes" Jan 26 00:12:39 crc kubenswrapper[5110]: I0126 00:12:39.734594 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-77449" event={"ID":"2438617c-3b33-4b33-971c-afaab481cfe6","Type":"ContainerStarted","Data":"be97c6b23977751ca3211e11936472cecf0a8fc84c7317beb0f24bea902d50c3"} Jan 26 00:12:39 crc kubenswrapper[5110]: I0126 00:12:39.735136 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-console/downloads-747b44746d-77449" Jan 26 00:12:39 crc kubenswrapper[5110]: I0126 00:12:39.737489 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 26 00:12:39 crc kubenswrapper[5110]: I0126 00:12:39.737608 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 26 00:12:39 crc kubenswrapper[5110]: I0126 00:12:39.740468 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2xlk" event={"ID":"10001c1a-643a-49dd-b5ba-88376896f044","Type":"ContainerStarted","Data":"c66631d19234ba9bc7db616a1c64ff0a80d4daf5c65e2d4506c3d0d298c6265d"} Jan 26 00:12:39 crc kubenswrapper[5110]: I0126 00:12:39.742340 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"03706aeb-7193-414e-b54e-e860c08dd10f","Type":"ContainerStarted","Data":"e78430b01c3be9daf617466e278dcecc50fba93496091edec460922b7a7b8c30"} Jan 26 00:12:39 crc kubenswrapper[5110]: I0126 00:12:39.748463 5110 generic.go:358] "Generic (PLEG): container finished" podID="4276f77a-a21c-47f2-9902-e37c3ab865f5" containerID="656781247ee60f7261dbe34ff63b4f0c54015fdb182ebcaaafd408adf49e9102" exitCode=0 Jan 26 00:12:39 crc kubenswrapper[5110]: I0126 00:12:39.749229 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dlcz" event={"ID":"4276f77a-a21c-47f2-9902-e37c3ab865f5","Type":"ContainerDied","Data":"656781247ee60f7261dbe34ff63b4f0c54015fdb182ebcaaafd408adf49e9102"} Jan 26 00:12:39 crc 
kubenswrapper[5110]: I0126 00:12:39.751686 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6ths" event={"ID":"168e3c2a-eb5e-451a-bba9-93e41fb1e958","Type":"ContainerStarted","Data":"d19b993fcac9451820a0f17a7577c0c857300e04e03c0cd2677397baefdb8506"} Jan 26 00:12:39 crc kubenswrapper[5110]: I0126 00:12:39.772379 5110 generic.go:358] "Generic (PLEG): container finished" podID="20cad200-6605-4fac-b28d-b84cd2d74a89" containerID="cd979908d80e7df3b36b6ff6105a2713018b397631ee8687effc84d1e5acf853" exitCode=0 Jan 26 00:12:39 crc kubenswrapper[5110]: I0126 00:12:39.772548 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2w74" event={"ID":"20cad200-6605-4fac-b28d-b84cd2d74a89","Type":"ContainerDied","Data":"cd979908d80e7df3b36b6ff6105a2713018b397631ee8687effc84d1e5acf853"} Jan 26 00:12:39 crc kubenswrapper[5110]: I0126 00:12:39.779681 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsblt" event={"ID":"96073c20-1f42-44fa-a922-6681dab132ef","Type":"ContainerStarted","Data":"356516d35afdbcf54fb2299cda85138053b5920dbc10f04005fc3e22fcfeeff6"} Jan 26 00:12:39 crc kubenswrapper[5110]: I0126 00:12:39.783153 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"c1dd88ef-db06-41bc-8be3-53730c6fa57f","Type":"ContainerStarted","Data":"f2dd2cf60790c8ca6797c6d356b6d64d749e265832151e737964ecb99253c2ae"} Jan 26 00:12:39 crc kubenswrapper[5110]: I0126 00:12:39.788187 5110 generic.go:358] "Generic (PLEG): container finished" podID="24c8aa2b-e367-4f40-92d2-ff31d5cbce2d" containerID="120a6b4dd59f9e2c64a2a6a289e0243c5daf676b52674d10b11d00fbe8bb91c6" exitCode=0 Jan 26 00:12:39 crc kubenswrapper[5110]: I0126 00:12:39.788909 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltvhd" 
event={"ID":"24c8aa2b-e367-4f40-92d2-ff31d5cbce2d","Type":"ContainerDied","Data":"120a6b4dd59f9e2c64a2a6a289e0243c5daf676b52674d10b11d00fbe8bb91c6"} Jan 26 00:12:39 crc kubenswrapper[5110]: I0126 00:12:39.806395 5110 generic.go:358] "Generic (PLEG): container finished" podID="063f39d3-4de8-4c26-99fc-3148a8738541" containerID="8242ab45d7f7e5501d6c0e3dce5468384304167e12d1aad8812bdd705b090385" exitCode=0 Jan 26 00:12:39 crc kubenswrapper[5110]: I0126 00:12:39.806558 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g6gk" event={"ID":"063f39d3-4de8-4c26-99fc-3148a8738541","Type":"ContainerDied","Data":"8242ab45d7f7e5501d6c0e3dce5468384304167e12d1aad8812bdd705b090385"} Jan 26 00:12:39 crc kubenswrapper[5110]: I0126 00:12:39.811493 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssztd" event={"ID":"4f9f1edc-9805-4975-8e01-a3428d24cc00","Type":"ContainerStarted","Data":"fc275dd5335d17cd8d660301f58c7cef231beee87d7c9838341849315b5fe359"} Jan 26 00:12:40 crc kubenswrapper[5110]: I0126 00:12:40.929579 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltvhd" event={"ID":"24c8aa2b-e367-4f40-92d2-ff31d5cbce2d","Type":"ContainerStarted","Data":"b9c3164d69f20e45ab0b07e351510d2f51c7ea50ba616474bfceb69b6a4057db"} Jan 26 00:12:40 crc kubenswrapper[5110]: I0126 00:12:40.936206 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g6gk" event={"ID":"063f39d3-4de8-4c26-99fc-3148a8738541","Type":"ContainerStarted","Data":"612200d4a7f3c5db909d8b17a3551fc7a4bec36c082ea2e609088c07cca8197f"} Jan 26 00:12:40 crc kubenswrapper[5110]: I0126 00:12:40.940144 5110 generic.go:358] "Generic (PLEG): container finished" podID="4f9f1edc-9805-4975-8e01-a3428d24cc00" containerID="fc275dd5335d17cd8d660301f58c7cef231beee87d7c9838341849315b5fe359" exitCode=0 Jan 26 00:12:40 crc kubenswrapper[5110]: I0126 
00:12:40.940394 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssztd" event={"ID":"4f9f1edc-9805-4975-8e01-a3428d24cc00","Type":"ContainerDied","Data":"fc275dd5335d17cd8d660301f58c7cef231beee87d7c9838341849315b5fe359"} Jan 26 00:12:40 crc kubenswrapper[5110]: I0126 00:12:40.969573 5110 generic.go:358] "Generic (PLEG): container finished" podID="10001c1a-643a-49dd-b5ba-88376896f044" containerID="c66631d19234ba9bc7db616a1c64ff0a80d4daf5c65e2d4506c3d0d298c6265d" exitCode=0 Jan 26 00:12:40 crc kubenswrapper[5110]: I0126 00:12:40.969671 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2xlk" event={"ID":"10001c1a-643a-49dd-b5ba-88376896f044","Type":"ContainerDied","Data":"c66631d19234ba9bc7db616a1c64ff0a80d4daf5c65e2d4506c3d0d298c6265d"} Jan 26 00:12:40 crc kubenswrapper[5110]: I0126 00:12:40.972140 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"03706aeb-7193-414e-b54e-e860c08dd10f","Type":"ContainerStarted","Data":"926dcc552dc9031d03279dccc97bc52f08ed71a8026f3d83021a9e8c411f7b04"} Jan 26 00:12:40 crc kubenswrapper[5110]: I0126 00:12:40.986555 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dlcz" event={"ID":"4276f77a-a21c-47f2-9902-e37c3ab865f5","Type":"ContainerStarted","Data":"68c63d9601cce8965e267e11a19e683789e44bdb30504077e32ea3f516b8f166"} Jan 26 00:12:40 crc kubenswrapper[5110]: I0126 00:12:40.988038 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"c1dd88ef-db06-41bc-8be3-53730c6fa57f","Type":"ContainerStarted","Data":"dba67fdea6a3493e7065b7aefdcceb1064607698419a340817355ce7c4cb6516"} Jan 26 00:12:40 crc kubenswrapper[5110]: I0126 00:12:40.989407 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 26 00:12:40 crc kubenswrapper[5110]: I0126 00:12:40.989472 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 26 00:12:41 crc kubenswrapper[5110]: I0126 00:12:41.016671 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ltvhd" podStartSLOduration=14.968561335 podStartE2EDuration="1m2.016655093s" podCreationTimestamp="2026-01-26 00:11:39 +0000 UTC" firstStartedPulling="2026-01-26 00:11:51.325129897 +0000 UTC m=+168.554028506" lastFinishedPulling="2026-01-26 00:12:38.373223655 +0000 UTC m=+215.602122264" observedRunningTime="2026-01-26 00:12:41.014539632 +0000 UTC m=+218.243438241" watchObservedRunningTime="2026-01-26 00:12:41.016655093 +0000 UTC m=+218.245553702" Jan 26 00:12:41 crc kubenswrapper[5110]: I0126 00:12:41.118483 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2dlcz" podStartSLOduration=15.204579162 podStartE2EDuration="1m2.118451899s" podCreationTimestamp="2026-01-26 00:11:39 +0000 UTC" firstStartedPulling="2026-01-26 00:11:51.460182562 +0000 UTC m=+168.689081171" lastFinishedPulling="2026-01-26 00:12:38.374055249 +0000 UTC m=+215.602953908" observedRunningTime="2026-01-26 00:12:41.11223667 +0000 UTC m=+218.341135279" watchObservedRunningTime="2026-01-26 00:12:41.118451899 +0000 UTC m=+218.347350508" Jan 26 00:12:41 crc kubenswrapper[5110]: I0126 00:12:41.339009 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" 
podStartSLOduration=16.33898705 podStartE2EDuration="16.33898705s" podCreationTimestamp="2026-01-26 00:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:41.338557438 +0000 UTC m=+218.567456047" watchObservedRunningTime="2026-01-26 00:12:41.33898705 +0000 UTC m=+218.567885659" Jan 26 00:12:41 crc kubenswrapper[5110]: I0126 00:12:41.418733 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5g6gk" podStartSLOduration=14.196442153 podStartE2EDuration="1m1.41869986s" podCreationTimestamp="2026-01-26 00:11:40 +0000 UTC" firstStartedPulling="2026-01-26 00:11:51.402772312 +0000 UTC m=+168.631670921" lastFinishedPulling="2026-01-26 00:12:38.625030019 +0000 UTC m=+215.853928628" observedRunningTime="2026-01-26 00:12:41.410125922 +0000 UTC m=+218.639024531" watchObservedRunningTime="2026-01-26 00:12:41.41869986 +0000 UTC m=+218.647598479" Jan 26 00:12:41 crc kubenswrapper[5110]: I0126 00:12:41.939507 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=11.939468991 podStartE2EDuration="11.939468991s" podCreationTimestamp="2026-01-26 00:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:41.933382145 +0000 UTC m=+219.162280764" watchObservedRunningTime="2026-01-26 00:12:41.939468991 +0000 UTC m=+219.168367600" Jan 26 00:12:42 crc kubenswrapper[5110]: I0126 00:12:42.432713 5110 generic.go:358] "Generic (PLEG): container finished" podID="03706aeb-7193-414e-b54e-e860c08dd10f" containerID="926dcc552dc9031d03279dccc97bc52f08ed71a8026f3d83021a9e8c411f7b04" exitCode=0 Jan 26 00:12:42 crc kubenswrapper[5110]: I0126 00:12:42.433180 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"03706aeb-7193-414e-b54e-e860c08dd10f","Type":"ContainerDied","Data":"926dcc552dc9031d03279dccc97bc52f08ed71a8026f3d83021a9e8c411f7b04"} Jan 26 00:12:42 crc kubenswrapper[5110]: I0126 00:12:42.441148 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2w74" event={"ID":"20cad200-6605-4fac-b28d-b84cd2d74a89","Type":"ContainerStarted","Data":"6f53b90ec9de29e7f8617fcaff0bb12a1efcfb7bf0d08d73c5d8f9e78d45a4b7"} Jan 26 00:12:42 crc kubenswrapper[5110]: I0126 00:12:42.644213 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-2dlcz" Jan 26 00:12:42 crc kubenswrapper[5110]: I0126 00:12:42.644602 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2dlcz" Jan 26 00:12:42 crc kubenswrapper[5110]: I0126 00:12:42.730775 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ltvhd" Jan 26 00:12:42 crc kubenswrapper[5110]: I0126 00:12:42.730885 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-ltvhd" Jan 26 00:12:42 crc kubenswrapper[5110]: I0126 00:12:42.820891 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5g6gk" Jan 26 00:12:42 crc kubenswrapper[5110]: I0126 00:12:42.820935 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-5g6gk" Jan 26 00:12:43 crc kubenswrapper[5110]: I0126 00:12:43.519449 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssztd" 
event={"ID":"4f9f1edc-9805-4975-8e01-a3428d24cc00","Type":"ContainerStarted","Data":"b7f4cff6bc6fc552cb3cf679a66c6a0ec0940c5335991075e3760dbd9b4b22a3"} Jan 26 00:12:43 crc kubenswrapper[5110]: I0126 00:12:43.684662 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c2w74" podStartSLOduration=16.026460062 podStartE2EDuration="1m5.684629468s" podCreationTimestamp="2026-01-26 00:11:38 +0000 UTC" firstStartedPulling="2026-01-26 00:11:49.092301181 +0000 UTC m=+166.321199790" lastFinishedPulling="2026-01-26 00:12:38.750470577 +0000 UTC m=+215.979369196" observedRunningTime="2026-01-26 00:12:42.494350296 +0000 UTC m=+219.723248935" watchObservedRunningTime="2026-01-26 00:12:43.684629468 +0000 UTC m=+220.913528077" Jan 26 00:12:43 crc kubenswrapper[5110]: I0126 00:12:43.684830 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ssztd" podStartSLOduration=17.366340003 podStartE2EDuration="1m4.684824774s" podCreationTimestamp="2026-01-26 00:11:39 +0000 UTC" firstStartedPulling="2026-01-26 00:11:51.426023094 +0000 UTC m=+168.654921693" lastFinishedPulling="2026-01-26 00:12:38.744507855 +0000 UTC m=+215.973406464" observedRunningTime="2026-01-26 00:12:43.676819833 +0000 UTC m=+220.905718452" watchObservedRunningTime="2026-01-26 00:12:43.684824774 +0000 UTC m=+220.913723383" Jan 26 00:12:44 crc kubenswrapper[5110]: I0126 00:12:44.542026 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2xlk" event={"ID":"10001c1a-643a-49dd-b5ba-88376896f044","Type":"ContainerStarted","Data":"d8e34cf5197b0f48ee7f9f1e932ec3fd5ee08868015057e5ae93e9639cda6134"} Jan 26 00:12:44 crc kubenswrapper[5110]: I0126 00:12:44.596701 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k2xlk" podStartSLOduration=16.454768831 
podStartE2EDuration="1m3.596677476s" podCreationTimestamp="2026-01-26 00:11:41 +0000 UTC" firstStartedPulling="2026-01-26 00:11:51.452113289 +0000 UTC m=+168.681011898" lastFinishedPulling="2026-01-26 00:12:38.594021934 +0000 UTC m=+215.822920543" observedRunningTime="2026-01-26 00:12:44.591764624 +0000 UTC m=+221.820663233" watchObservedRunningTime="2026-01-26 00:12:44.596677476 +0000 UTC m=+221.825576085" Jan 26 00:12:45 crc kubenswrapper[5110]: I0126 00:12:45.096508 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:45 crc kubenswrapper[5110]: I0126 00:12:45.188982 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03706aeb-7193-414e-b54e-e860c08dd10f-kubelet-dir\") pod \"03706aeb-7193-414e-b54e-e860c08dd10f\" (UID: \"03706aeb-7193-414e-b54e-e860c08dd10f\") " Jan 26 00:12:45 crc kubenswrapper[5110]: I0126 00:12:45.189106 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03706aeb-7193-414e-b54e-e860c08dd10f-kube-api-access\") pod \"03706aeb-7193-414e-b54e-e860c08dd10f\" (UID: \"03706aeb-7193-414e-b54e-e860c08dd10f\") " Jan 26 00:12:45 crc kubenswrapper[5110]: I0126 00:12:45.190477 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03706aeb-7193-414e-b54e-e860c08dd10f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "03706aeb-7193-414e-b54e-e860c08dd10f" (UID: "03706aeb-7193-414e-b54e-e860c08dd10f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:12:45 crc kubenswrapper[5110]: I0126 00:12:45.213930 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03706aeb-7193-414e-b54e-e860c08dd10f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "03706aeb-7193-414e-b54e-e860c08dd10f" (UID: "03706aeb-7193-414e-b54e-e860c08dd10f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:12:45 crc kubenswrapper[5110]: I0126 00:12:45.290654 5110 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03706aeb-7193-414e-b54e-e860c08dd10f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:45 crc kubenswrapper[5110]: I0126 00:12:45.290742 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03706aeb-7193-414e-b54e-e860c08dd10f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:45 crc kubenswrapper[5110]: I0126 00:12:45.510175 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-5g6gk" podUID="063f39d3-4de8-4c26-99fc-3148a8738541" containerName="registry-server" probeResult="failure" output=< Jan 26 00:12:45 crc kubenswrapper[5110]: timeout: failed to connect service ":50051" within 1s Jan 26 00:12:45 crc kubenswrapper[5110]: > Jan 26 00:12:45 crc kubenswrapper[5110]: I0126 00:12:45.519442 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-2dlcz" podUID="4276f77a-a21c-47f2-9902-e37c3ab865f5" containerName="registry-server" probeResult="failure" output=< Jan 26 00:12:45 crc kubenswrapper[5110]: timeout: failed to connect service ":50051" within 1s Jan 26 00:12:45 crc kubenswrapper[5110]: > Jan 26 00:12:45 crc kubenswrapper[5110]: I0126 00:12:45.548862 5110 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/community-operators-ltvhd" podUID="24c8aa2b-e367-4f40-92d2-ff31d5cbce2d" containerName="registry-server" probeResult="failure" output=< Jan 26 00:12:45 crc kubenswrapper[5110]: timeout: failed to connect service ":50051" within 1s Jan 26 00:12:45 crc kubenswrapper[5110]: > Jan 26 00:12:45 crc kubenswrapper[5110]: I0126 00:12:45.550010 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"03706aeb-7193-414e-b54e-e860c08dd10f","Type":"ContainerDied","Data":"e78430b01c3be9daf617466e278dcecc50fba93496091edec460922b7a7b8c30"} Jan 26 00:12:45 crc kubenswrapper[5110]: I0126 00:12:45.550097 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e78430b01c3be9daf617466e278dcecc50fba93496091edec460922b7a7b8c30" Jan 26 00:12:45 crc kubenswrapper[5110]: I0126 00:12:45.550126 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:45 crc kubenswrapper[5110]: I0126 00:12:45.552735 5110 generic.go:358] "Generic (PLEG): container finished" podID="168e3c2a-eb5e-451a-bba9-93e41fb1e958" containerID="d19b993fcac9451820a0f17a7577c0c857300e04e03c0cd2677397baefdb8506" exitCode=0 Jan 26 00:12:45 crc kubenswrapper[5110]: I0126 00:12:45.552847 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6ths" event={"ID":"168e3c2a-eb5e-451a-bba9-93e41fb1e958","Type":"ContainerDied","Data":"d19b993fcac9451820a0f17a7577c0c857300e04e03c0cd2677397baefdb8506"} Jan 26 00:12:45 crc kubenswrapper[5110]: I0126 00:12:45.789449 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 26 00:12:45 crc kubenswrapper[5110]: I0126 
00:12:45.789946 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 26 00:12:46 crc kubenswrapper[5110]: I0126 00:12:46.566267 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6ths" event={"ID":"168e3c2a-eb5e-451a-bba9-93e41fb1e958","Type":"ContainerStarted","Data":"b5cc787dd637231a5a60a4bbefe8f7a5e7c978f0b3196dbd616e501d6092c251"} Jan 26 00:12:46 crc kubenswrapper[5110]: I0126 00:12:46.568947 5110 generic.go:358] "Generic (PLEG): container finished" podID="96073c20-1f42-44fa-a922-6681dab132ef" containerID="356516d35afdbcf54fb2299cda85138053b5920dbc10f04005fc3e22fcfeeff6" exitCode=0 Jan 26 00:12:46 crc kubenswrapper[5110]: I0126 00:12:46.569035 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsblt" event={"ID":"96073c20-1f42-44fa-a922-6681dab132ef","Type":"ContainerDied","Data":"356516d35afdbcf54fb2299cda85138053b5920dbc10f04005fc3e22fcfeeff6"} Jan 26 00:12:47 crc kubenswrapper[5110]: I0126 00:12:47.744407 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d6ths" podStartSLOduration=19.432365897 podStartE2EDuration="1m6.744380268s" podCreationTimestamp="2026-01-26 00:11:41 +0000 UTC" firstStartedPulling="2026-01-26 00:11:51.482682272 +0000 UTC m=+168.711580881" lastFinishedPulling="2026-01-26 00:12:38.794696643 +0000 UTC m=+216.023595252" observedRunningTime="2026-01-26 00:12:47.742493324 +0000 UTC m=+224.971391943" watchObservedRunningTime="2026-01-26 00:12:47.744380268 +0000 UTC m=+224.973278877" Jan 26 00:12:48 crc kubenswrapper[5110]: I0126 00:12:48.585744 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-wsblt" event={"ID":"96073c20-1f42-44fa-a922-6681dab132ef","Type":"ContainerStarted","Data":"2f310706285222ac84db4ce9200a2a248a343fc45a458a20e36a1d62c6c6d45f"} Jan 26 00:12:49 crc kubenswrapper[5110]: I0126 00:12:49.748846 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wsblt" podStartSLOduration=20.302586902 podStartE2EDuration="1m7.748821335s" podCreationTimestamp="2026-01-26 00:11:42 +0000 UTC" firstStartedPulling="2026-01-26 00:11:51.313266184 +0000 UTC m=+168.542164793" lastFinishedPulling="2026-01-26 00:12:38.759500607 +0000 UTC m=+215.988399226" observedRunningTime="2026-01-26 00:12:49.746411446 +0000 UTC m=+226.975310065" watchObservedRunningTime="2026-01-26 00:12:49.748821335 +0000 UTC m=+226.977719954" Jan 26 00:12:50 crc kubenswrapper[5110]: I0126 00:12:50.989827 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 26 00:12:50 crc kubenswrapper[5110]: I0126 00:12:50.989927 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 26 00:12:52 crc kubenswrapper[5110]: I0126 00:12:52.433086 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-c2w74" Jan 26 00:12:52 crc kubenswrapper[5110]: I0126 00:12:52.433536 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c2w74" Jan 26 00:12:52 crc kubenswrapper[5110]: I0126 00:12:52.674822 5110 
kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c2w74" Jan 26 00:12:52 crc kubenswrapper[5110]: I0126 00:12:52.678489 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2dlcz" Jan 26 00:12:52 crc kubenswrapper[5110]: I0126 00:12:52.727137 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2dlcz" Jan 26 00:12:52 crc kubenswrapper[5110]: I0126 00:12:52.788104 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c2w74" Jan 26 00:12:52 crc kubenswrapper[5110]: I0126 00:12:52.795947 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ssztd" Jan 26 00:12:52 crc kubenswrapper[5110]: I0126 00:12:52.796117 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-ssztd" Jan 26 00:12:52 crc kubenswrapper[5110]: I0126 00:12:52.809891 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ltvhd" Jan 26 00:12:52 crc kubenswrapper[5110]: I0126 00:12:52.848998 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ssztd" Jan 26 00:12:52 crc kubenswrapper[5110]: I0126 00:12:52.881003 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ltvhd" Jan 26 00:12:52 crc kubenswrapper[5110]: I0126 00:12:52.897595 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5g6gk" Jan 26 00:12:52 crc kubenswrapper[5110]: I0126 00:12:52.947592 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-5g6gk" Jan 26 00:12:53 crc kubenswrapper[5110]: I0126 00:12:53.073055 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k2xlk" Jan 26 00:12:53 crc kubenswrapper[5110]: I0126 00:12:53.073475 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-k2xlk" Jan 26 00:12:53 crc kubenswrapper[5110]: I0126 00:12:53.094564 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-d6ths" Jan 26 00:12:53 crc kubenswrapper[5110]: I0126 00:12:53.095020 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d6ths" Jan 26 00:12:53 crc kubenswrapper[5110]: I0126 00:12:53.110758 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wsblt" Jan 26 00:12:53 crc kubenswrapper[5110]: I0126 00:12:53.110843 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-wsblt" Jan 26 00:12:53 crc kubenswrapper[5110]: I0126 00:12:53.119130 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k2xlk" Jan 26 00:12:53 crc kubenswrapper[5110]: I0126 00:12:53.813228 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k2xlk" Jan 26 00:12:53 crc kubenswrapper[5110]: I0126 00:12:53.842633 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ssztd" Jan 26 00:12:54 crc kubenswrapper[5110]: I0126 00:12:54.142223 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d6ths" podUID="168e3c2a-eb5e-451a-bba9-93e41fb1e958" 
containerName="registry-server" probeResult="failure" output=< Jan 26 00:12:54 crc kubenswrapper[5110]: timeout: failed to connect service ":50051" within 1s Jan 26 00:12:54 crc kubenswrapper[5110]: > Jan 26 00:12:54 crc kubenswrapper[5110]: I0126 00:12:54.153230 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wsblt" podUID="96073c20-1f42-44fa-a922-6681dab132ef" containerName="registry-server" probeResult="failure" output=< Jan 26 00:12:54 crc kubenswrapper[5110]: timeout: failed to connect service ":50051" within 1s Jan 26 00:12:54 crc kubenswrapper[5110]: > Jan 26 00:12:55 crc kubenswrapper[5110]: I0126 00:12:55.207688 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ltvhd"] Jan 26 00:12:55 crc kubenswrapper[5110]: I0126 00:12:55.208494 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ltvhd" podUID="24c8aa2b-e367-4f40-92d2-ff31d5cbce2d" containerName="registry-server" containerID="cri-o://b9c3164d69f20e45ab0b07e351510d2f51c7ea50ba616474bfceb69b6a4057db" gracePeriod=2 Jan 26 00:12:55 crc kubenswrapper[5110]: I0126 00:12:55.788914 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 26 00:12:55 crc kubenswrapper[5110]: I0126 00:12:55.789072 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 26 00:12:55 crc kubenswrapper[5110]: I0126 00:12:55.812386 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-k2xlk"] Jan 26 00:12:56 crc kubenswrapper[5110]: I0126 00:12:56.778648 5110 generic.go:358] "Generic (PLEG): container finished" podID="24c8aa2b-e367-4f40-92d2-ff31d5cbce2d" containerID="b9c3164d69f20e45ab0b07e351510d2f51c7ea50ba616474bfceb69b6a4057db" exitCode=0 Jan 26 00:12:56 crc kubenswrapper[5110]: I0126 00:12:56.778726 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltvhd" event={"ID":"24c8aa2b-e367-4f40-92d2-ff31d5cbce2d","Type":"ContainerDied","Data":"b9c3164d69f20e45ab0b07e351510d2f51c7ea50ba616474bfceb69b6a4057db"} Jan 26 00:12:56 crc kubenswrapper[5110]: I0126 00:12:56.779336 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k2xlk" podUID="10001c1a-643a-49dd-b5ba-88376896f044" containerName="registry-server" containerID="cri-o://d8e34cf5197b0f48ee7f9f1e932ec3fd5ee08868015057e5ae93e9639cda6134" gracePeriod=2 Jan 26 00:12:56 crc kubenswrapper[5110]: I0126 00:12:56.813330 5110 patch_prober.go:28] interesting pod/machine-config-daemon-c6tpr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:12:56 crc kubenswrapper[5110]: I0126 00:12:56.813451 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:12:57 crc kubenswrapper[5110]: I0126 00:12:57.617232 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ssztd"] Jan 26 00:12:57 crc kubenswrapper[5110]: I0126 00:12:57.617974 5110 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ssztd" podUID="4f9f1edc-9805-4975-8e01-a3428d24cc00" containerName="registry-server" containerID="cri-o://b7f4cff6bc6fc552cb3cf679a66c6a0ec0940c5335991075e3760dbd9b4b22a3" gracePeriod=2 Jan 26 00:12:58 crc kubenswrapper[5110]: I0126 00:12:58.334657 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ltvhd" Jan 26 00:12:58 crc kubenswrapper[5110]: I0126 00:12:58.447628 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzhvw\" (UniqueName: \"kubernetes.io/projected/24c8aa2b-e367-4f40-92d2-ff31d5cbce2d-kube-api-access-tzhvw\") pod \"24c8aa2b-e367-4f40-92d2-ff31d5cbce2d\" (UID: \"24c8aa2b-e367-4f40-92d2-ff31d5cbce2d\") " Jan 26 00:12:58 crc kubenswrapper[5110]: I0126 00:12:58.447707 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24c8aa2b-e367-4f40-92d2-ff31d5cbce2d-utilities\") pod \"24c8aa2b-e367-4f40-92d2-ff31d5cbce2d\" (UID: \"24c8aa2b-e367-4f40-92d2-ff31d5cbce2d\") " Jan 26 00:12:58 crc kubenswrapper[5110]: I0126 00:12:58.447865 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24c8aa2b-e367-4f40-92d2-ff31d5cbce2d-catalog-content\") pod \"24c8aa2b-e367-4f40-92d2-ff31d5cbce2d\" (UID: \"24c8aa2b-e367-4f40-92d2-ff31d5cbce2d\") " Jan 26 00:12:58 crc kubenswrapper[5110]: I0126 00:12:58.449415 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24c8aa2b-e367-4f40-92d2-ff31d5cbce2d-utilities" (OuterVolumeSpecName: "utilities") pod "24c8aa2b-e367-4f40-92d2-ff31d5cbce2d" (UID: "24c8aa2b-e367-4f40-92d2-ff31d5cbce2d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:12:58 crc kubenswrapper[5110]: I0126 00:12:58.450381 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24c8aa2b-e367-4f40-92d2-ff31d5cbce2d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:58 crc kubenswrapper[5110]: I0126 00:12:58.460089 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24c8aa2b-e367-4f40-92d2-ff31d5cbce2d-kube-api-access-tzhvw" (OuterVolumeSpecName: "kube-api-access-tzhvw") pod "24c8aa2b-e367-4f40-92d2-ff31d5cbce2d" (UID: "24c8aa2b-e367-4f40-92d2-ff31d5cbce2d"). InnerVolumeSpecName "kube-api-access-tzhvw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:12:58 crc kubenswrapper[5110]: I0126 00:12:58.501548 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24c8aa2b-e367-4f40-92d2-ff31d5cbce2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24c8aa2b-e367-4f40-92d2-ff31d5cbce2d" (UID: "24c8aa2b-e367-4f40-92d2-ff31d5cbce2d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:12:58 crc kubenswrapper[5110]: I0126 00:12:58.552622 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tzhvw\" (UniqueName: \"kubernetes.io/projected/24c8aa2b-e367-4f40-92d2-ff31d5cbce2d-kube-api-access-tzhvw\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:58 crc kubenswrapper[5110]: I0126 00:12:58.552670 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24c8aa2b-e367-4f40-92d2-ff31d5cbce2d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:58 crc kubenswrapper[5110]: I0126 00:12:58.794979 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltvhd" event={"ID":"24c8aa2b-e367-4f40-92d2-ff31d5cbce2d","Type":"ContainerDied","Data":"3fe162bddca2db0ac193f6255f9f759e282c846fa80b66d348708f624a93d157"}
Jan 26 00:12:58 crc kubenswrapper[5110]: I0126 00:12:58.795027 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ltvhd"
Jan 26 00:12:58 crc kubenswrapper[5110]: I0126 00:12:58.795060 5110 scope.go:117] "RemoveContainer" containerID="b9c3164d69f20e45ab0b07e351510d2f51c7ea50ba616474bfceb69b6a4057db"
Jan 26 00:12:58 crc kubenswrapper[5110]: I0126 00:12:58.799487 5110 generic.go:358] "Generic (PLEG): container finished" podID="10001c1a-643a-49dd-b5ba-88376896f044" containerID="d8e34cf5197b0f48ee7f9f1e932ec3fd5ee08868015057e5ae93e9639cda6134" exitCode=0
Jan 26 00:12:58 crc kubenswrapper[5110]: I0126 00:12:58.799685 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2xlk" event={"ID":"10001c1a-643a-49dd-b5ba-88376896f044","Type":"ContainerDied","Data":"d8e34cf5197b0f48ee7f9f1e932ec3fd5ee08868015057e5ae93e9639cda6134"}
Jan 26 00:12:58 crc kubenswrapper[5110]: I0126 00:12:58.817832 5110 scope.go:117] "RemoveContainer" containerID="120a6b4dd59f9e2c64a2a6a289e0243c5daf676b52674d10b11d00fbe8bb91c6"
Jan 26 00:12:58 crc kubenswrapper[5110]: I0126 00:12:58.829588 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ltvhd"]
Jan 26 00:12:58 crc kubenswrapper[5110]: I0126 00:12:58.836234 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ltvhd"]
Jan 26 00:12:58 crc kubenswrapper[5110]: I0126 00:12:58.862571 5110 scope.go:117] "RemoveContainer" containerID="a633ec49b1fb6a296a7bfb42cc57776595511abdca806c1afb00952be8ff873a"
Jan 26 00:12:59 crc kubenswrapper[5110]: I0126 00:12:59.109718 5110 ???:1] "http: TLS handshake error from 192.168.126.11:47992: no serving certificate available for the kubelet"
Jan 26 00:12:59 crc kubenswrapper[5110]: I0126 00:12:59.326065 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24c8aa2b-e367-4f40-92d2-ff31d5cbce2d" path="/var/lib/kubelet/pods/24c8aa2b-e367-4f40-92d2-ff31d5cbce2d/volumes"
Jan 26 00:12:59 crc kubenswrapper[5110]: I0126 00:12:59.808893 5110 generic.go:358] "Generic (PLEG): container finished" podID="4f9f1edc-9805-4975-8e01-a3428d24cc00" containerID="b7f4cff6bc6fc552cb3cf679a66c6a0ec0940c5335991075e3760dbd9b4b22a3" exitCode=0
Jan 26 00:12:59 crc kubenswrapper[5110]: I0126 00:12:59.809209 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssztd" event={"ID":"4f9f1edc-9805-4975-8e01-a3428d24cc00","Type":"ContainerDied","Data":"b7f4cff6bc6fc552cb3cf679a66c6a0ec0940c5335991075e3760dbd9b4b22a3"}
Jan 26 00:12:59 crc kubenswrapper[5110]: I0126 00:12:59.812460 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2xlk" event={"ID":"10001c1a-643a-49dd-b5ba-88376896f044","Type":"ContainerDied","Data":"a146d7ebbeb2f70c4d0ada534a15cdc2ecaf5c1baa76a98a96a1e169773f2d9a"}
Jan 26 00:12:59 crc kubenswrapper[5110]: I0126 00:12:59.812494 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a146d7ebbeb2f70c4d0ada534a15cdc2ecaf5c1baa76a98a96a1e169773f2d9a"
Jan 26 00:12:59 crc kubenswrapper[5110]: I0126 00:12:59.832482 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k2xlk"
Jan 26 00:12:59 crc kubenswrapper[5110]: I0126 00:12:59.876745 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m6xq\" (UniqueName: \"kubernetes.io/projected/10001c1a-643a-49dd-b5ba-88376896f044-kube-api-access-2m6xq\") pod \"10001c1a-643a-49dd-b5ba-88376896f044\" (UID: \"10001c1a-643a-49dd-b5ba-88376896f044\") "
Jan 26 00:12:59 crc kubenswrapper[5110]: I0126 00:12:59.876924 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10001c1a-643a-49dd-b5ba-88376896f044-catalog-content\") pod \"10001c1a-643a-49dd-b5ba-88376896f044\" (UID: \"10001c1a-643a-49dd-b5ba-88376896f044\") "
Jan 26 00:12:59 crc kubenswrapper[5110]: I0126 00:12:59.877031 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10001c1a-643a-49dd-b5ba-88376896f044-utilities\") pod \"10001c1a-643a-49dd-b5ba-88376896f044\" (UID: \"10001c1a-643a-49dd-b5ba-88376896f044\") "
Jan 26 00:12:59 crc kubenswrapper[5110]: I0126 00:12:59.878417 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10001c1a-643a-49dd-b5ba-88376896f044-utilities" (OuterVolumeSpecName: "utilities") pod "10001c1a-643a-49dd-b5ba-88376896f044" (UID: "10001c1a-643a-49dd-b5ba-88376896f044"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:12:59 crc kubenswrapper[5110]: I0126 00:12:59.882399 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10001c1a-643a-49dd-b5ba-88376896f044-kube-api-access-2m6xq" (OuterVolumeSpecName: "kube-api-access-2m6xq") pod "10001c1a-643a-49dd-b5ba-88376896f044" (UID: "10001c1a-643a-49dd-b5ba-88376896f044"). InnerVolumeSpecName "kube-api-access-2m6xq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:12:59 crc kubenswrapper[5110]: I0126 00:12:59.895658 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10001c1a-643a-49dd-b5ba-88376896f044-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10001c1a-643a-49dd-b5ba-88376896f044" (UID: "10001c1a-643a-49dd-b5ba-88376896f044"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:12:59 crc kubenswrapper[5110]: I0126 00:12:59.978699 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2m6xq\" (UniqueName: \"kubernetes.io/projected/10001c1a-643a-49dd-b5ba-88376896f044-kube-api-access-2m6xq\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:59 crc kubenswrapper[5110]: I0126 00:12:59.978768 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10001c1a-643a-49dd-b5ba-88376896f044-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:59 crc kubenswrapper[5110]: I0126 00:12:59.978787 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10001c1a-643a-49dd-b5ba-88376896f044-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.460371 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ssztd"
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.593053 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjdlf\" (UniqueName: \"kubernetes.io/projected/4f9f1edc-9805-4975-8e01-a3428d24cc00-kube-api-access-tjdlf\") pod \"4f9f1edc-9805-4975-8e01-a3428d24cc00\" (UID: \"4f9f1edc-9805-4975-8e01-a3428d24cc00\") "
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.593231 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f9f1edc-9805-4975-8e01-a3428d24cc00-utilities\") pod \"4f9f1edc-9805-4975-8e01-a3428d24cc00\" (UID: \"4f9f1edc-9805-4975-8e01-a3428d24cc00\") "
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.593334 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f9f1edc-9805-4975-8e01-a3428d24cc00-catalog-content\") pod \"4f9f1edc-9805-4975-8e01-a3428d24cc00\" (UID: \"4f9f1edc-9805-4975-8e01-a3428d24cc00\") "
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.594254 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f9f1edc-9805-4975-8e01-a3428d24cc00-utilities" (OuterVolumeSpecName: "utilities") pod "4f9f1edc-9805-4975-8e01-a3428d24cc00" (UID: "4f9f1edc-9805-4975-8e01-a3428d24cc00"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.600618 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f9f1edc-9805-4975-8e01-a3428d24cc00-kube-api-access-tjdlf" (OuterVolumeSpecName: "kube-api-access-tjdlf") pod "4f9f1edc-9805-4975-8e01-a3428d24cc00" (UID: "4f9f1edc-9805-4975-8e01-a3428d24cc00"). InnerVolumeSpecName "kube-api-access-tjdlf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.624565 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f9f1edc-9805-4975-8e01-a3428d24cc00-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f9f1edc-9805-4975-8e01-a3428d24cc00" (UID: "4f9f1edc-9805-4975-8e01-a3428d24cc00"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.694926 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f9f1edc-9805-4975-8e01-a3428d24cc00-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.695220 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f9f1edc-9805-4975-8e01-a3428d24cc00-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.695304 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tjdlf\" (UniqueName: \"kubernetes.io/projected/4f9f1edc-9805-4975-8e01-a3428d24cc00-kube-api-access-tjdlf\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.820895 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ssztd"
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.820895 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k2xlk"
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.820888 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssztd" event={"ID":"4f9f1edc-9805-4975-8e01-a3428d24cc00","Type":"ContainerDied","Data":"817c9ccabaef4b0a230f8773ff7f8a82b43fb582a546e3a28060aad6aa01dd8a"}
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.824245 5110 scope.go:117] "RemoveContainer" containerID="b7f4cff6bc6fc552cb3cf679a66c6a0ec0940c5335991075e3760dbd9b4b22a3"
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.846256 5110 scope.go:117] "RemoveContainer" containerID="fc275dd5335d17cd8d660301f58c7cef231beee87d7c9838341849315b5fe359"
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.854729 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ssztd"]
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.866608 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ssztd"]
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.874331 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2xlk"]
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.877012 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2xlk"]
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.884413 5110 scope.go:117] "RemoveContainer" containerID="4364a397d7c04cbd52fc6ca0232c219109b264ea769149e1b006c8c223917ff0"
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.989821 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-77449 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 26 00:13:00 crc kubenswrapper[5110]: I0126 00:13:00.989922 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-77449" podUID="2438617c-3b33-4b33-971c-afaab481cfe6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 26 00:13:01 crc kubenswrapper[5110]: I0126 00:13:01.327013 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10001c1a-643a-49dd-b5ba-88376896f044" path="/var/lib/kubelet/pods/10001c1a-643a-49dd-b5ba-88376896f044/volumes"
Jan 26 00:13:01 crc kubenswrapper[5110]: I0126 00:13:01.327713 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f9f1edc-9805-4975-8e01-a3428d24cc00" path="/var/lib/kubelet/pods/4f9f1edc-9805-4975-8e01-a3428d24cc00/volumes"
Jan 26 00:13:03 crc kubenswrapper[5110]: I0126 00:13:03.393572 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wsblt"
Jan 26 00:13:03 crc kubenswrapper[5110]: I0126 00:13:03.396425 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d6ths"
Jan 26 00:13:03 crc kubenswrapper[5110]: I0126 00:13:03.445777 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wsblt"
Jan 26 00:13:03 crc kubenswrapper[5110]: I0126 00:13:03.455158 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d6ths"
Jan 26 00:13:06 crc kubenswrapper[5110]: I0126 00:13:06.607195 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wsblt"]
Jan 26 00:13:06 crc kubenswrapper[5110]: I0126 00:13:06.609214 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wsblt" podUID="96073c20-1f42-44fa-a922-6681dab132ef" containerName="registry-server" containerID="cri-o://2f310706285222ac84db4ce9200a2a248a343fc45a458a20e36a1d62c6c6d45f" gracePeriod=2
Jan 26 00:13:09 crc kubenswrapper[5110]: I0126 00:13:09.901878 5110 generic.go:358] "Generic (PLEG): container finished" podID="96073c20-1f42-44fa-a922-6681dab132ef" containerID="2f310706285222ac84db4ce9200a2a248a343fc45a458a20e36a1d62c6c6d45f" exitCode=0
Jan 26 00:13:09 crc kubenswrapper[5110]: I0126 00:13:09.901966 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsblt" event={"ID":"96073c20-1f42-44fa-a922-6681dab132ef","Type":"ContainerDied","Data":"2f310706285222ac84db4ce9200a2a248a343fc45a458a20e36a1d62c6c6d45f"}
Jan 26 00:13:11 crc kubenswrapper[5110]: I0126 00:13:11.011311 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-77449"
Jan 26 00:13:11 crc kubenswrapper[5110]: I0126 00:13:11.341891 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wsblt"
Jan 26 00:13:11 crc kubenswrapper[5110]: I0126 00:13:11.464503 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96073c20-1f42-44fa-a922-6681dab132ef-utilities\") pod \"96073c20-1f42-44fa-a922-6681dab132ef\" (UID: \"96073c20-1f42-44fa-a922-6681dab132ef\") "
Jan 26 00:13:11 crc kubenswrapper[5110]: I0126 00:13:11.464685 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfhr4\" (UniqueName: \"kubernetes.io/projected/96073c20-1f42-44fa-a922-6681dab132ef-kube-api-access-cfhr4\") pod \"96073c20-1f42-44fa-a922-6681dab132ef\" (UID: \"96073c20-1f42-44fa-a922-6681dab132ef\") "
Jan 26 00:13:11 crc kubenswrapper[5110]: I0126 00:13:11.464783 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96073c20-1f42-44fa-a922-6681dab132ef-catalog-content\") pod \"96073c20-1f42-44fa-a922-6681dab132ef\" (UID: \"96073c20-1f42-44fa-a922-6681dab132ef\") "
Jan 26 00:13:11 crc kubenswrapper[5110]: I0126 00:13:11.465855 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96073c20-1f42-44fa-a922-6681dab132ef-utilities" (OuterVolumeSpecName: "utilities") pod "96073c20-1f42-44fa-a922-6681dab132ef" (UID: "96073c20-1f42-44fa-a922-6681dab132ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:13:11 crc kubenswrapper[5110]: I0126 00:13:11.476619 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96073c20-1f42-44fa-a922-6681dab132ef-kube-api-access-cfhr4" (OuterVolumeSpecName: "kube-api-access-cfhr4") pod "96073c20-1f42-44fa-a922-6681dab132ef" (UID: "96073c20-1f42-44fa-a922-6681dab132ef"). InnerVolumeSpecName "kube-api-access-cfhr4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:13:11 crc kubenswrapper[5110]: I0126 00:13:11.565541 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96073c20-1f42-44fa-a922-6681dab132ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96073c20-1f42-44fa-a922-6681dab132ef" (UID: "96073c20-1f42-44fa-a922-6681dab132ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:13:11 crc kubenswrapper[5110]: I0126 00:13:11.567177 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96073c20-1f42-44fa-a922-6681dab132ef-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:11 crc kubenswrapper[5110]: I0126 00:13:11.567197 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cfhr4\" (UniqueName: \"kubernetes.io/projected/96073c20-1f42-44fa-a922-6681dab132ef-kube-api-access-cfhr4\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:11 crc kubenswrapper[5110]: I0126 00:13:11.567207 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96073c20-1f42-44fa-a922-6681dab132ef-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:11 crc kubenswrapper[5110]: I0126 00:13:11.917616 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsblt" event={"ID":"96073c20-1f42-44fa-a922-6681dab132ef","Type":"ContainerDied","Data":"1ea07b88848eedd72f5d3c5365a66f9edfb36655e2a77cbc3c9333a9d85944b2"}
Jan 26 00:13:11 crc kubenswrapper[5110]: I0126 00:13:11.918215 5110 scope.go:117] "RemoveContainer" containerID="2f310706285222ac84db4ce9200a2a248a343fc45a458a20e36a1d62c6c6d45f"
Jan 26 00:13:11 crc kubenswrapper[5110]: I0126 00:13:11.917681 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wsblt"
Jan 26 00:13:11 crc kubenswrapper[5110]: I0126 00:13:11.939873 5110 scope.go:117] "RemoveContainer" containerID="356516d35afdbcf54fb2299cda85138053b5920dbc10f04005fc3e22fcfeeff6"
Jan 26 00:13:11 crc kubenswrapper[5110]: I0126 00:13:11.948026 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wsblt"]
Jan 26 00:13:11 crc kubenswrapper[5110]: I0126 00:13:11.952436 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wsblt"]
Jan 26 00:13:11 crc kubenswrapper[5110]: I0126 00:13:11.982254 5110 scope.go:117] "RemoveContainer" containerID="60f756369d575eb8d001fb9559365610b3025359a9a19634d0d08ad74aa47c53"
Jan 26 00:13:13 crc kubenswrapper[5110]: I0126 00:13:13.326101 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96073c20-1f42-44fa-a922-6681dab132ef" path="/var/lib/kubelet/pods/96073c20-1f42-44fa-a922-6681dab132ef/volumes"
Jan 26 00:13:16 crc kubenswrapper[5110]: I0126 00:13:16.269898 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-tg86c"]
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.744913 5110 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747098 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="10001c1a-643a-49dd-b5ba-88376896f044" containerName="extract-utilities"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747133 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="10001c1a-643a-49dd-b5ba-88376896f044" containerName="extract-utilities"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747169 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="10001c1a-643a-49dd-b5ba-88376896f044" containerName="registry-server"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747177 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="10001c1a-643a-49dd-b5ba-88376896f044" containerName="registry-server"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747207 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f9f1edc-9805-4975-8e01-a3428d24cc00" containerName="registry-server"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747216 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f9f1edc-9805-4975-8e01-a3428d24cc00" containerName="registry-server"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747228 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="96073c20-1f42-44fa-a922-6681dab132ef" containerName="extract-content"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747235 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="96073c20-1f42-44fa-a922-6681dab132ef" containerName="extract-content"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747249 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="96073c20-1f42-44fa-a922-6681dab132ef" containerName="extract-utilities"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747256 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="96073c20-1f42-44fa-a922-6681dab132ef" containerName="extract-utilities"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747267 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="24c8aa2b-e367-4f40-92d2-ff31d5cbce2d" containerName="extract-content"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747274 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c8aa2b-e367-4f40-92d2-ff31d5cbce2d" containerName="extract-content"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747290 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="24c8aa2b-e367-4f40-92d2-ff31d5cbce2d" containerName="extract-utilities"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747297 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c8aa2b-e367-4f40-92d2-ff31d5cbce2d" containerName="extract-utilities"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747311 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="96073c20-1f42-44fa-a922-6681dab132ef" containerName="registry-server"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747320 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="96073c20-1f42-44fa-a922-6681dab132ef" containerName="registry-server"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747333 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8c8a6437-b0cb-4825-999f-6e523fd394e9" containerName="kube-multus-additional-cni-plugins"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747355 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8a6437-b0cb-4825-999f-6e523fd394e9" containerName="kube-multus-additional-cni-plugins"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747369 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="03706aeb-7193-414e-b54e-e860c08dd10f" containerName="pruner"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747377 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="03706aeb-7193-414e-b54e-e860c08dd10f" containerName="pruner"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747391 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f9f1edc-9805-4975-8e01-a3428d24cc00" containerName="extract-content"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747398 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f9f1edc-9805-4975-8e01-a3428d24cc00" containerName="extract-content"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747406 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="10001c1a-643a-49dd-b5ba-88376896f044" containerName="extract-content"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747413 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="10001c1a-643a-49dd-b5ba-88376896f044" containerName="extract-content"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747424 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f9f1edc-9805-4975-8e01-a3428d24cc00" containerName="extract-utilities"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747429 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f9f1edc-9805-4975-8e01-a3428d24cc00" containerName="extract-utilities"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747435 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="24c8aa2b-e367-4f40-92d2-ff31d5cbce2d" containerName="registry-server"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747440 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c8aa2b-e367-4f40-92d2-ff31d5cbce2d" containerName="registry-server"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747572 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="96073c20-1f42-44fa-a922-6681dab132ef" containerName="registry-server"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747593 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="8c8a6437-b0cb-4825-999f-6e523fd394e9" containerName="kube-multus-additional-cni-plugins"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747628 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="03706aeb-7193-414e-b54e-e860c08dd10f" containerName="pruner"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747655 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="4f9f1edc-9805-4975-8e01-a3428d24cc00" containerName="registry-server"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747672 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="24c8aa2b-e367-4f40-92d2-ff31d5cbce2d" containerName="registry-server"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.747680 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="10001c1a-643a-49dd-b5ba-88376896f044" containerName="registry-server"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.853947 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.854940 5110 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.855013 5110 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.855626 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://fc0c2347169345d3886bfc369c472fa0e2cf381c3b596a46b7de48d84e955504" gracePeriod=15
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.855673 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://44222dc6a7066064989d8823f6c0f556f4d4d8cac2f2b4ab1462cbb51db8ff07" gracePeriod=15
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.855647 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://ca0a9c3e32a579de04d542b7d2c66ce1ea575b229700d8fdae6cdb109b6ab46e" gracePeriod=15
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.855768 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://edc58dbb97e7a54abf71095739d515edea8205bf2c5e4008b22a69d40b83cb2f" gracePeriod=15
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856327 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856363 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856381 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856389 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856398 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856293 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://a67e63f2b2d217c10ec50b836c9cca85c134aef230c27dbb1ae8139a6dddd72b" gracePeriod=15
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856405 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856494 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856504 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856530 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856539 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856556 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856564 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856582 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856591 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856612 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856620 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856629 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856636 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856777 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856833 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856846 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856855 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856882 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856893 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.856906 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.857023 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.857034 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.857239 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.857284 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.932118 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.932185 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.932240 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.932264 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.932278 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.958695 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 00:13:22 crc kubenswrapper[5110]: E0126 00:13:22.959606 5110 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.144:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.997917 5110 generic.go:358] "Generic (PLEG): container finished" podID="497eadd3-187e-4dfb-82c3-fad0d59eb723" containerID="4900e5e2cbcf3819d08be0339eb469a67850f5e241a73bd277e5ef98a14c61ab"
exitCode=0 Jan 26 00:13:22 crc kubenswrapper[5110]: I0126 00:13:22.998075 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29489760-jjbnv" event={"ID":"497eadd3-187e-4dfb-82c3-fad0d59eb723","Type":"ContainerDied","Data":"4900e5e2cbcf3819d08be0339eb469a67850f5e241a73bd277e5ef98a14c61ab"} Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.000041 5110 status_manager.go:895] "Failed to get status for pod" podUID="497eadd3-187e-4dfb-82c3-fad0d59eb723" pod="openshift-image-registry/image-pruner-29489760-jjbnv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-jjbnv\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.000115 5110 generic.go:358] "Generic (PLEG): container finished" podID="c1dd88ef-db06-41bc-8be3-53730c6fa57f" containerID="dba67fdea6a3493e7065b7aefdcceb1064607698419a340817355ce7c4cb6516" exitCode=0 Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.000270 5110 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.000347 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"c1dd88ef-db06-41bc-8be3-53730c6fa57f","Type":"ContainerDied","Data":"dba67fdea6a3493e7065b7aefdcceb1064607698419a340817355ce7c4cb6516"} Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.001149 5110 status_manager.go:895] "Failed to get status for pod" podUID="c1dd88ef-db06-41bc-8be3-53730c6fa57f" pod="openshift-kube-apiserver/installer-12-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.001353 5110 status_manager.go:895] "Failed to get status for pod" podUID="497eadd3-187e-4dfb-82c3-fad0d59eb723" pod="openshift-image-registry/image-pruner-29489760-jjbnv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-jjbnv\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.001533 5110 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.003632 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.007149 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.007954 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="fc0c2347169345d3886bfc369c472fa0e2cf381c3b596a46b7de48d84e955504" exitCode=0 Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.007985 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="edc58dbb97e7a54abf71095739d515edea8205bf2c5e4008b22a69d40b83cb2f" exitCode=0 Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 
00:13:23.007994 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="44222dc6a7066064989d8823f6c0f556f4d4d8cac2f2b4ab1462cbb51db8ff07" exitCode=2 Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.034074 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.034365 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.034439 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.034250 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.034612 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.034715 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.034847 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.034991 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.035100 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.035177 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.035046 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.035320 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.035430 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.035652 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.035838 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.136981 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.137045 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.137065 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.137093 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.137165 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.137215 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.137263 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.137580 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.137940 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.138017 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.260961 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:23 crc kubenswrapper[5110]: E0126 00:13:23.298661 5110 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.144:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e1f8f26fd0936 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:13:23.298085174 +0000 UTC m=+260.526983783,LastTimestamp:2026-01-26 00:13:23.298085174 +0000 UTC m=+260.526983783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.321853 5110 status_manager.go:895] "Failed to get status for pod" podUID="c1dd88ef-db06-41bc-8be3-53730c6fa57f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.322113 5110 status_manager.go:895] "Failed to get status for pod" podUID="497eadd3-187e-4dfb-82c3-fad0d59eb723" pod="openshift-image-registry/image-pruner-29489760-jjbnv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-jjbnv\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 26 00:13:23 crc kubenswrapper[5110]: I0126 00:13:23.322325 5110 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.022178 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.024472 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.026001 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ca0a9c3e32a579de04d542b7d2c66ce1ea575b229700d8fdae6cdb109b6ab46e" exitCode=0 Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.026099 5110 scope.go:117] "RemoveContainer" containerID="3202209affc7d7c4869fa3cbbd7b3a166dee850e7789b407826358743c13e508" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.035637 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"077aa94e712614c3555071929003ecea892ce2f732994a30c717a9ab162b7927"} Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.035790 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"2fd0d16e84ff65d4c5aaaf6bda9824ae092a9f172d48a31a47dc0f7e2b21aec6"} Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.036273 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:24 crc kubenswrapper[5110]: E0126 00:13:24.037669 5110 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.144:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.037707 5110 status_manager.go:895] "Failed to get status for pod" podUID="c1dd88ef-db06-41bc-8be3-53730c6fa57f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.038505 5110 status_manager.go:895] "Failed to get status for pod" podUID="497eadd3-187e-4dfb-82c3-fad0d59eb723" pod="openshift-image-registry/image-pruner-29489760-jjbnv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-jjbnv\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.335327 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.336028 5110 status_manager.go:895] "Failed to get status for pod" podUID="497eadd3-187e-4dfb-82c3-fad0d59eb723" pod="openshift-image-registry/image-pruner-29489760-jjbnv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-jjbnv\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.336245 5110 status_manager.go:895] "Failed to get status for pod" podUID="c1dd88ef-db06-41bc-8be3-53730c6fa57f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.339302 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29489760-jjbnv" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.339585 5110 status_manager.go:895] "Failed to get status for pod" podUID="c1dd88ef-db06-41bc-8be3-53730c6fa57f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.339776 5110 status_manager.go:895] "Failed to get status for pod" podUID="497eadd3-187e-4dfb-82c3-fad0d59eb723" pod="openshift-image-registry/image-pruner-29489760-jjbnv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-jjbnv\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.459425 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1dd88ef-db06-41bc-8be3-53730c6fa57f-var-lock\") pod \"c1dd88ef-db06-41bc-8be3-53730c6fa57f\" (UID: \"c1dd88ef-db06-41bc-8be3-53730c6fa57f\") " Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.459502 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1dd88ef-db06-41bc-8be3-53730c6fa57f-kube-api-access\") pod \"c1dd88ef-db06-41bc-8be3-53730c6fa57f\" (UID: \"c1dd88ef-db06-41bc-8be3-53730c6fa57f\") " Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.459618 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6m5r\" (UniqueName: \"kubernetes.io/projected/497eadd3-187e-4dfb-82c3-fad0d59eb723-kube-api-access-b6m5r\") pod \"497eadd3-187e-4dfb-82c3-fad0d59eb723\" (UID: \"497eadd3-187e-4dfb-82c3-fad0d59eb723\") " Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.459618 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1dd88ef-db06-41bc-8be3-53730c6fa57f-var-lock" (OuterVolumeSpecName: "var-lock") pod "c1dd88ef-db06-41bc-8be3-53730c6fa57f" (UID: "c1dd88ef-db06-41bc-8be3-53730c6fa57f"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.460576 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1dd88ef-db06-41bc-8be3-53730c6fa57f-kubelet-dir\") pod \"c1dd88ef-db06-41bc-8be3-53730c6fa57f\" (UID: \"c1dd88ef-db06-41bc-8be3-53730c6fa57f\") " Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.460664 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/497eadd3-187e-4dfb-82c3-fad0d59eb723-serviceca\") pod \"497eadd3-187e-4dfb-82c3-fad0d59eb723\" (UID: \"497eadd3-187e-4dfb-82c3-fad0d59eb723\") " Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.460704 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1dd88ef-db06-41bc-8be3-53730c6fa57f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c1dd88ef-db06-41bc-8be3-53730c6fa57f" (UID: "c1dd88ef-db06-41bc-8be3-53730c6fa57f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.460972 5110 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1dd88ef-db06-41bc-8be3-53730c6fa57f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.460994 5110 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1dd88ef-db06-41bc-8be3-53730c6fa57f-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.461515 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/497eadd3-187e-4dfb-82c3-fad0d59eb723-serviceca" (OuterVolumeSpecName: "serviceca") pod "497eadd3-187e-4dfb-82c3-fad0d59eb723" (UID: "497eadd3-187e-4dfb-82c3-fad0d59eb723"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.466662 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1dd88ef-db06-41bc-8be3-53730c6fa57f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c1dd88ef-db06-41bc-8be3-53730c6fa57f" (UID: "c1dd88ef-db06-41bc-8be3-53730c6fa57f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.467122 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/497eadd3-187e-4dfb-82c3-fad0d59eb723-kube-api-access-b6m5r" (OuterVolumeSpecName: "kube-api-access-b6m5r") pod "497eadd3-187e-4dfb-82c3-fad0d59eb723" (UID: "497eadd3-187e-4dfb-82c3-fad0d59eb723"). InnerVolumeSpecName "kube-api-access-b6m5r". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.562041 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1dd88ef-db06-41bc-8be3-53730c6fa57f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.562096 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b6m5r\" (UniqueName: \"kubernetes.io/projected/497eadd3-187e-4dfb-82c3-fad0d59eb723-kube-api-access-b6m5r\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:24 crc kubenswrapper[5110]: I0126 00:13:24.562114 5110 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/497eadd3-187e-4dfb-82c3-fad0d59eb723-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.046722 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"c1dd88ef-db06-41bc-8be3-53730c6fa57f","Type":"ContainerDied","Data":"f2dd2cf60790c8ca6797c6d356b6d64d749e265832151e737964ecb99253c2ae"} Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.047104 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2dd2cf60790c8ca6797c6d356b6d64d749e265832151e737964ecb99253c2ae" Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.047024 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.052765 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.056290 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.056744 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29489760-jjbnv"
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.057092 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29489760-jjbnv" event={"ID":"497eadd3-187e-4dfb-82c3-fad0d59eb723","Type":"ContainerDied","Data":"f2fab06e6d0b2fc11f06b3b87a963bf855598c797480bd56210cec0407b96dda"}
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.057192 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2fab06e6d0b2fc11f06b3b87a963bf855598c797480bd56210cec0407b96dda"
Jan 26 00:13:25 crc kubenswrapper[5110]: E0126 00:13:25.057179 5110 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.144:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.067339 5110 status_manager.go:895] "Failed to get status for pod" podUID="497eadd3-187e-4dfb-82c3-fad0d59eb723" pod="openshift-image-registry/image-pruner-29489760-jjbnv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-jjbnv\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.067743 5110 status_manager.go:895] "Failed to get status for pod" podUID="c1dd88ef-db06-41bc-8be3-53730c6fa57f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.075458 5110 status_manager.go:895] "Failed to get status for pod" podUID="c1dd88ef-db06-41bc-8be3-53730c6fa57f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.076176 5110 status_manager.go:895] "Failed to get status for pod" podUID="497eadd3-187e-4dfb-82c3-fad0d59eb723" pod="openshift-image-registry/image-pruner-29489760-jjbnv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-jjbnv\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.277251 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.278528 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.279570 5110 status_manager.go:895] "Failed to get status for pod" podUID="c1dd88ef-db06-41bc-8be3-53730c6fa57f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.279858 5110 status_manager.go:895] "Failed to get status for pod" podUID="497eadd3-187e-4dfb-82c3-fad0d59eb723" pod="openshift-image-registry/image-pruner-29489760-jjbnv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-jjbnv\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.280187 5110 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.375569 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.376185 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.376233 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.376253 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.376289 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.376370 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.376475 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.376508 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.376568 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.376904 5110 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.376918 5110 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.376928 5110 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.376937 5110 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.379434 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:13:25 crc kubenswrapper[5110]: I0126 00:13:25.478029 5110 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.067019 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.069325 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="a67e63f2b2d217c10ec50b836c9cca85c134aef230c27dbb1ae8139a6dddd72b" exitCode=0
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.069525 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.069530 5110 scope.go:117] "RemoveContainer" containerID="ca0a9c3e32a579de04d542b7d2c66ce1ea575b229700d8fdae6cdb109b6ab46e"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.070393 5110 status_manager.go:895] "Failed to get status for pod" podUID="c1dd88ef-db06-41bc-8be3-53730c6fa57f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.070699 5110 status_manager.go:895] "Failed to get status for pod" podUID="497eadd3-187e-4dfb-82c3-fad0d59eb723" pod="openshift-image-registry/image-pruner-29489760-jjbnv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-jjbnv\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.071402 5110 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.088277 5110 status_manager.go:895] "Failed to get status for pod" podUID="497eadd3-187e-4dfb-82c3-fad0d59eb723" pod="openshift-image-registry/image-pruner-29489760-jjbnv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-jjbnv\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.088566 5110 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.089069 5110 status_manager.go:895] "Failed to get status for pod" podUID="c1dd88ef-db06-41bc-8be3-53730c6fa57f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.100633 5110 scope.go:117] "RemoveContainer" containerID="fc0c2347169345d3886bfc369c472fa0e2cf381c3b596a46b7de48d84e955504"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.123690 5110 scope.go:117] "RemoveContainer" containerID="edc58dbb97e7a54abf71095739d515edea8205bf2c5e4008b22a69d40b83cb2f"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.145863 5110 scope.go:117] "RemoveContainer" containerID="44222dc6a7066064989d8823f6c0f556f4d4d8cac2f2b4ab1462cbb51db8ff07"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.164860 5110 scope.go:117] "RemoveContainer" containerID="a67e63f2b2d217c10ec50b836c9cca85c134aef230c27dbb1ae8139a6dddd72b"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.184080 5110 scope.go:117] "RemoveContainer" containerID="2fe6116a5b9da02a5b57aaf57f7bed21247934453d936503dd835d1eac1c23b6"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.255065 5110 scope.go:117] "RemoveContainer" containerID="ca0a9c3e32a579de04d542b7d2c66ce1ea575b229700d8fdae6cdb109b6ab46e"
Jan 26 00:13:26 crc kubenswrapper[5110]: E0126 00:13:26.256228 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca0a9c3e32a579de04d542b7d2c66ce1ea575b229700d8fdae6cdb109b6ab46e\": container with ID starting with ca0a9c3e32a579de04d542b7d2c66ce1ea575b229700d8fdae6cdb109b6ab46e not found: ID does not exist" containerID="ca0a9c3e32a579de04d542b7d2c66ce1ea575b229700d8fdae6cdb109b6ab46e"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.256298 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca0a9c3e32a579de04d542b7d2c66ce1ea575b229700d8fdae6cdb109b6ab46e"} err="failed to get container status \"ca0a9c3e32a579de04d542b7d2c66ce1ea575b229700d8fdae6cdb109b6ab46e\": rpc error: code = NotFound desc = could not find container \"ca0a9c3e32a579de04d542b7d2c66ce1ea575b229700d8fdae6cdb109b6ab46e\": container with ID starting with ca0a9c3e32a579de04d542b7d2c66ce1ea575b229700d8fdae6cdb109b6ab46e not found: ID does not exist"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.256343 5110 scope.go:117] "RemoveContainer" containerID="fc0c2347169345d3886bfc369c472fa0e2cf381c3b596a46b7de48d84e955504"
Jan 26 00:13:26 crc kubenswrapper[5110]: E0126 00:13:26.256783 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc0c2347169345d3886bfc369c472fa0e2cf381c3b596a46b7de48d84e955504\": container with ID starting with fc0c2347169345d3886bfc369c472fa0e2cf381c3b596a46b7de48d84e955504 not found: ID does not exist" containerID="fc0c2347169345d3886bfc369c472fa0e2cf381c3b596a46b7de48d84e955504"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.256852 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc0c2347169345d3886bfc369c472fa0e2cf381c3b596a46b7de48d84e955504"} err="failed to get container status \"fc0c2347169345d3886bfc369c472fa0e2cf381c3b596a46b7de48d84e955504\": rpc error: code = NotFound desc = could not find container \"fc0c2347169345d3886bfc369c472fa0e2cf381c3b596a46b7de48d84e955504\": container with ID starting with fc0c2347169345d3886bfc369c472fa0e2cf381c3b596a46b7de48d84e955504 not found: ID does not exist"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.256885 5110 scope.go:117] "RemoveContainer" containerID="edc58dbb97e7a54abf71095739d515edea8205bf2c5e4008b22a69d40b83cb2f"
Jan 26 00:13:26 crc kubenswrapper[5110]: E0126 00:13:26.257220 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edc58dbb97e7a54abf71095739d515edea8205bf2c5e4008b22a69d40b83cb2f\": container with ID starting with edc58dbb97e7a54abf71095739d515edea8205bf2c5e4008b22a69d40b83cb2f not found: ID does not exist" containerID="edc58dbb97e7a54abf71095739d515edea8205bf2c5e4008b22a69d40b83cb2f"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.257242 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edc58dbb97e7a54abf71095739d515edea8205bf2c5e4008b22a69d40b83cb2f"} err="failed to get container status \"edc58dbb97e7a54abf71095739d515edea8205bf2c5e4008b22a69d40b83cb2f\": rpc error: code = NotFound desc = could not find container \"edc58dbb97e7a54abf71095739d515edea8205bf2c5e4008b22a69d40b83cb2f\": container with ID starting with edc58dbb97e7a54abf71095739d515edea8205bf2c5e4008b22a69d40b83cb2f not found: ID does not exist"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.257260 5110 scope.go:117] "RemoveContainer" containerID="44222dc6a7066064989d8823f6c0f556f4d4d8cac2f2b4ab1462cbb51db8ff07"
Jan 26 00:13:26 crc kubenswrapper[5110]: E0126 00:13:26.257465 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44222dc6a7066064989d8823f6c0f556f4d4d8cac2f2b4ab1462cbb51db8ff07\": container with ID starting with 44222dc6a7066064989d8823f6c0f556f4d4d8cac2f2b4ab1462cbb51db8ff07 not found: ID does not exist" containerID="44222dc6a7066064989d8823f6c0f556f4d4d8cac2f2b4ab1462cbb51db8ff07"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.257488 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44222dc6a7066064989d8823f6c0f556f4d4d8cac2f2b4ab1462cbb51db8ff07"} err="failed to get container status \"44222dc6a7066064989d8823f6c0f556f4d4d8cac2f2b4ab1462cbb51db8ff07\": rpc error: code = NotFound desc = could not find container \"44222dc6a7066064989d8823f6c0f556f4d4d8cac2f2b4ab1462cbb51db8ff07\": container with ID starting with 44222dc6a7066064989d8823f6c0f556f4d4d8cac2f2b4ab1462cbb51db8ff07 not found: ID does not exist"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.257501 5110 scope.go:117] "RemoveContainer" containerID="a67e63f2b2d217c10ec50b836c9cca85c134aef230c27dbb1ae8139a6dddd72b"
Jan 26 00:13:26 crc kubenswrapper[5110]: E0126 00:13:26.258135 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a67e63f2b2d217c10ec50b836c9cca85c134aef230c27dbb1ae8139a6dddd72b\": container with ID starting with a67e63f2b2d217c10ec50b836c9cca85c134aef230c27dbb1ae8139a6dddd72b not found: ID does not exist" containerID="a67e63f2b2d217c10ec50b836c9cca85c134aef230c27dbb1ae8139a6dddd72b"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.258824 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a67e63f2b2d217c10ec50b836c9cca85c134aef230c27dbb1ae8139a6dddd72b"} err="failed to get container status \"a67e63f2b2d217c10ec50b836c9cca85c134aef230c27dbb1ae8139a6dddd72b\": rpc error: code = NotFound desc = could not find container \"a67e63f2b2d217c10ec50b836c9cca85c134aef230c27dbb1ae8139a6dddd72b\": container with ID starting with a67e63f2b2d217c10ec50b836c9cca85c134aef230c27dbb1ae8139a6dddd72b not found: ID does not exist"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.258984 5110 scope.go:117] "RemoveContainer" containerID="2fe6116a5b9da02a5b57aaf57f7bed21247934453d936503dd835d1eac1c23b6"
Jan 26 00:13:26 crc kubenswrapper[5110]: E0126 00:13:26.260053 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fe6116a5b9da02a5b57aaf57f7bed21247934453d936503dd835d1eac1c23b6\": container with ID starting with 2fe6116a5b9da02a5b57aaf57f7bed21247934453d936503dd835d1eac1c23b6 not found: ID does not exist" containerID="2fe6116a5b9da02a5b57aaf57f7bed21247934453d936503dd835d1eac1c23b6"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.260087 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fe6116a5b9da02a5b57aaf57f7bed21247934453d936503dd835d1eac1c23b6"} err="failed to get container status \"2fe6116a5b9da02a5b57aaf57f7bed21247934453d936503dd835d1eac1c23b6\": rpc error: code = NotFound desc = could not find container \"2fe6116a5b9da02a5b57aaf57f7bed21247934453d936503dd835d1eac1c23b6\": container with ID starting with 2fe6116a5b9da02a5b57aaf57f7bed21247934453d936503dd835d1eac1c23b6 not found: ID does not exist"
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.813255 5110 patch_prober.go:28] interesting pod/machine-config-daemon-c6tpr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 00:13:26 crc kubenswrapper[5110]: I0126 00:13:26.813360 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 00:13:27 crc kubenswrapper[5110]: I0126 00:13:27.328890 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes"
Jan 26 00:13:28 crc kubenswrapper[5110]: E0126 00:13:28.897438 5110 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.144:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e1f8f26fd0936 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:13:23.298085174 +0000 UTC m=+260.526983783,LastTimestamp:2026-01-26 00:13:23.298085174 +0000 UTC m=+260.526983783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 00:13:30 crc kubenswrapper[5110]: E0126 00:13:30.106494 5110 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:30 crc kubenswrapper[5110]: E0126 00:13:30.107213 5110 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:30 crc kubenswrapper[5110]: E0126 00:13:30.107517 5110 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:30 crc kubenswrapper[5110]: E0126 00:13:30.107746 5110 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:30 crc kubenswrapper[5110]: E0126 00:13:30.107997 5110 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:30 crc kubenswrapper[5110]: I0126 00:13:30.108029 5110 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 26 00:13:30 crc kubenswrapper[5110]: E0126 00:13:30.108219 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="200ms"
Jan 26 00:13:30 crc kubenswrapper[5110]: E0126 00:13:30.309537 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="400ms"
Jan 26 00:13:30 crc kubenswrapper[5110]: E0126 00:13:30.711344 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="800ms"
Jan 26 00:13:31 crc kubenswrapper[5110]: E0126 00:13:31.513307 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="1.6s"
Jan 26 00:13:33 crc kubenswrapper[5110]: E0126 00:13:33.114533 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="3.2s"
Jan 26 00:13:33 crc kubenswrapper[5110]: I0126 00:13:33.320959 5110 status_manager.go:895] "Failed to get status for pod" podUID="497eadd3-187e-4dfb-82c3-fad0d59eb723" pod="openshift-image-registry/image-pruner-29489760-jjbnv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-jjbnv\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:33 crc kubenswrapper[5110]: I0126 00:13:33.321441 5110 status_manager.go:895] "Failed to get status for pod" podUID="c1dd88ef-db06-41bc-8be3-53730c6fa57f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:36 crc kubenswrapper[5110]: E0126 00:13:36.315572 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="6.4s"
Jan 26 00:13:36 crc kubenswrapper[5110]: I0126 00:13:36.381103 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 26 00:13:36 crc kubenswrapper[5110]: I0126 00:13:36.381181 5110 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="201fbacf5107a565f88c0b909a2d52c716cda4ea473eb26d2c613fe248ecf434" exitCode=1
Jan 26 00:13:36 crc kubenswrapper[5110]: I0126 00:13:36.381275 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"201fbacf5107a565f88c0b909a2d52c716cda4ea473eb26d2c613fe248ecf434"}
Jan 26 00:13:36 crc kubenswrapper[5110]: I0126 00:13:36.382046 5110 scope.go:117] "RemoveContainer" containerID="201fbacf5107a565f88c0b909a2d52c716cda4ea473eb26d2c613fe248ecf434"
Jan 26 00:13:36 crc kubenswrapper[5110]: I0126 00:13:36.382696 5110 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:36 crc kubenswrapper[5110]: I0126 00:13:36.383587 5110 status_manager.go:895] "Failed to get status for pod" podUID="497eadd3-187e-4dfb-82c3-fad0d59eb723" pod="openshift-image-registry/image-pruner-29489760-jjbnv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-jjbnv\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:36 crc kubenswrapper[5110]: I0126 00:13:36.384179 5110 status_manager.go:895] "Failed to get status for pod" podUID="c1dd88ef-db06-41bc-8be3-53730c6fa57f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:37 crc kubenswrapper[5110]: I0126 00:13:37.173946 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 00:13:37 crc kubenswrapper[5110]: I0126 00:13:37.317099 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:37 crc kubenswrapper[5110]: I0126 00:13:37.318568 5110 status_manager.go:895] "Failed to get status for pod" podUID="c1dd88ef-db06-41bc-8be3-53730c6fa57f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:37 crc kubenswrapper[5110]: I0126 00:13:37.319944 5110 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:37 crc kubenswrapper[5110]: I0126 00:13:37.321143 5110 status_manager.go:895] "Failed to get status for pod" podUID="497eadd3-187e-4dfb-82c3-fad0d59eb723" pod="openshift-image-registry/image-pruner-29489760-jjbnv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-jjbnv\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:37 crc kubenswrapper[5110]: I0126 00:13:37.343524 5110 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6bf35b8e-a6c5-479a-860f-db0308fb993b"
Jan 26 00:13:37 crc kubenswrapper[5110]: I0126 00:13:37.343576 5110 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6bf35b8e-a6c5-479a-860f-db0308fb993b"
Jan 26 00:13:37 crc kubenswrapper[5110]: E0126 00:13:37.344583 5110 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:37 crc kubenswrapper[5110]: I0126 00:13:37.345308 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:37 crc kubenswrapper[5110]: W0126 00:13:37.375346 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-27df2bbf7616188b73d4eb68eeea6a3821c469e4af02b2494962d1a3edaf3520 WatchSource:0}: Error finding container 27df2bbf7616188b73d4eb68eeea6a3821c469e4af02b2494962d1a3edaf3520: Status 404 returned error can't find the container with id 27df2bbf7616188b73d4eb68eeea6a3821c469e4af02b2494962d1a3edaf3520
Jan 26 00:13:37 crc kubenswrapper[5110]: I0126 00:13:37.391274 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"27df2bbf7616188b73d4eb68eeea6a3821c469e4af02b2494962d1a3edaf3520"}
Jan 26 00:13:37 crc kubenswrapper[5110]: I0126 00:13:37.395551 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 26 00:13:37 crc kubenswrapper[5110]: I0126 00:13:37.395722 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"5cb8582ca837c8461488d8ba1846f15ded0ac3505a57e90bb5b1977f2dcff6d1"}
Jan 26 00:13:37 crc kubenswrapper[5110]: I0126 00:13:37.399228 5110 status_manager.go:895] "Failed to get status for pod" podUID="c1dd88ef-db06-41bc-8be3-53730c6fa57f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:37 crc kubenswrapper[5110]: I0126 00:13:37.399989 5110 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:37 crc kubenswrapper[5110]: I0126 00:13:37.400599 5110 status_manager.go:895] "Failed to get status for pod" podUID="497eadd3-187e-4dfb-82c3-fad0d59eb723" pod="openshift-image-registry/image-pruner-29489760-jjbnv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-jjbnv\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:38 crc kubenswrapper[5110]: I0126 00:13:38.407211 5110 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="419be6ac604a89ff1368471839f1aa7efb6198ad5cad78428a73b6088aeb603f" exitCode=0
Jan 26 00:13:38 crc kubenswrapper[5110]: I0126 00:13:38.407302 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"419be6ac604a89ff1368471839f1aa7efb6198ad5cad78428a73b6088aeb603f"}
Jan 26 00:13:38 crc kubenswrapper[5110]: I0126 00:13:38.407592 5110 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6bf35b8e-a6c5-479a-860f-db0308fb993b"
Jan 26 00:13:38 crc kubenswrapper[5110]: I0126 00:13:38.407613 5110 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6bf35b8e-a6c5-479a-860f-db0308fb993b"
Jan 26 00:13:38 crc kubenswrapper[5110]: E0126 00:13:38.408307 5110 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:38 crc kubenswrapper[5110]: I0126 00:13:38.408430 5110 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:38 crc kubenswrapper[5110]: I0126 00:13:38.409897 5110 status_manager.go:895] "Failed to get status for pod" podUID="497eadd3-187e-4dfb-82c3-fad0d59eb723" pod="openshift-image-registry/image-pruner-29489760-jjbnv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-jjbnv\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:38 crc kubenswrapper[5110]: I0126 00:13:38.410227 5110 status_manager.go:895] "Failed to get status for pod" podUID="c1dd88ef-db06-41bc-8be3-53730c6fa57f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Jan 26 00:13:38 crc kubenswrapper[5110]: I0126 00:13:38.696094 5110 kubelet.go:2658]
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:13:38 crc kubenswrapper[5110]: I0126 00:13:38.696129 5110 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 00:13:38 crc kubenswrapper[5110]: I0126 00:13:38.696524 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 00:13:39 crc kubenswrapper[5110]: I0126 00:13:39.419593 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"6478b3f02ae7e26a252bad2a09d943bc0acf47db8e9e82c3a0927fb4c134be95"} Jan 26 00:13:39 crc kubenswrapper[5110]: I0126 00:13:39.419670 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"1dd62c937695638766a4bb0ba01dbb2c94ea8d3f581a0e1c6dde4935acb76c47"} Jan 26 00:13:39 crc kubenswrapper[5110]: I0126 00:13:39.419687 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"5c356146a93afbe665bbc5ee7d153073e8d7669f8a380906798cd1a8ddc5a57d"} Jan 26 00:13:40 crc kubenswrapper[5110]: I0126 00:13:40.433007 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"9cb6774f75ea21fcc9a7bb562ddf6456fcfdb8eee5e5590ee12c763b936d702f"} Jan 26 00:13:40 crc kubenswrapper[5110]: I0126 00:13:40.433079 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"6606b2ddaf8775bd809424ce8d8a7120a20f02495ab0a8f3107429130611badf"} Jan 26 00:13:40 crc kubenswrapper[5110]: I0126 00:13:40.433442 5110 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6bf35b8e-a6c5-479a-860f-db0308fb993b" Jan 26 00:13:40 crc kubenswrapper[5110]: I0126 00:13:40.433466 5110 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6bf35b8e-a6c5-479a-860f-db0308fb993b" Jan 26 00:13:40 crc kubenswrapper[5110]: I0126 00:13:40.433546 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.303258 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" podUID="b4b9ee78-19a7-41bf-97e4-9ab13bad2730" containerName="oauth-openshift" containerID="cri-o://95597becabb11d191639e23ff0da6faf49de134608802bdcff1e857e652878e3" gracePeriod=15 Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.444428 5110 generic.go:358] "Generic (PLEG): container finished" podID="b4b9ee78-19a7-41bf-97e4-9ab13bad2730" containerID="95597becabb11d191639e23ff0da6faf49de134608802bdcff1e857e652878e3" exitCode=0 Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.444521 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" 
event={"ID":"b4b9ee78-19a7-41bf-97e4-9ab13bad2730","Type":"ContainerDied","Data":"95597becabb11d191639e23ff0da6faf49de134608802bdcff1e857e652878e3"} Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.755282 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.851780 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-template-provider-selection\") pod \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.852276 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-audit-policies\") pod \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.852323 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-template-error\") pod \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.852374 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-serving-cert\") pod \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.852402 5110 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-audit-dir\") pod \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.852456 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-service-ca\") pod \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.852488 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhsv6\" (UniqueName: \"kubernetes.io/projected/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-kube-api-access-fhsv6\") pod \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.852534 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-cliconfig\") pod \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.852565 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-router-certs\") pod \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.852589 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-trusted-ca-bundle\") pod \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.852647 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-idp-0-file-data\") pod \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.852678 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-session\") pod \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.852737 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-template-login\") pod \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.852831 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-ocp-branding-template\") pod \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\" (UID: \"b4b9ee78-19a7-41bf-97e4-9ab13bad2730\") " Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.854364 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-audit-policies" (OuterVolumeSpecName: 
"audit-policies") pod "b4b9ee78-19a7-41bf-97e4-9ab13bad2730" (UID: "b4b9ee78-19a7-41bf-97e4-9ab13bad2730"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.854412 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b4b9ee78-19a7-41bf-97e4-9ab13bad2730" (UID: "b4b9ee78-19a7-41bf-97e4-9ab13bad2730"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.854472 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b4b9ee78-19a7-41bf-97e4-9ab13bad2730" (UID: "b4b9ee78-19a7-41bf-97e4-9ab13bad2730"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.857169 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b4b9ee78-19a7-41bf-97e4-9ab13bad2730" (UID: "b4b9ee78-19a7-41bf-97e4-9ab13bad2730"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.858853 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b4b9ee78-19a7-41bf-97e4-9ab13bad2730" (UID: "b4b9ee78-19a7-41bf-97e4-9ab13bad2730"). 
InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.862457 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b4b9ee78-19a7-41bf-97e4-9ab13bad2730" (UID: "b4b9ee78-19a7-41bf-97e4-9ab13bad2730"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.862854 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b4b9ee78-19a7-41bf-97e4-9ab13bad2730" (UID: "b4b9ee78-19a7-41bf-97e4-9ab13bad2730"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.865368 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b4b9ee78-19a7-41bf-97e4-9ab13bad2730" (UID: "b4b9ee78-19a7-41bf-97e4-9ab13bad2730"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.865865 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b4b9ee78-19a7-41bf-97e4-9ab13bad2730" (UID: "b4b9ee78-19a7-41bf-97e4-9ab13bad2730"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.867212 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b4b9ee78-19a7-41bf-97e4-9ab13bad2730" (UID: "b4b9ee78-19a7-41bf-97e4-9ab13bad2730"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.867549 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b4b9ee78-19a7-41bf-97e4-9ab13bad2730" (UID: "b4b9ee78-19a7-41bf-97e4-9ab13bad2730"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.867908 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b4b9ee78-19a7-41bf-97e4-9ab13bad2730" (UID: "b4b9ee78-19a7-41bf-97e4-9ab13bad2730"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.869406 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b4b9ee78-19a7-41bf-97e4-9ab13bad2730" (UID: "b4b9ee78-19a7-41bf-97e4-9ab13bad2730"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.879857 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-kube-api-access-fhsv6" (OuterVolumeSpecName: "kube-api-access-fhsv6") pod "b4b9ee78-19a7-41bf-97e4-9ab13bad2730" (UID: "b4b9ee78-19a7-41bf-97e4-9ab13bad2730"). InnerVolumeSpecName "kube-api-access-fhsv6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.954594 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fhsv6\" (UniqueName: \"kubernetes.io/projected/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-kube-api-access-fhsv6\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.954672 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.954688 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.954700 5110 reconciler_common.go:299] "Volume detached for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.954711 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.954720 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.954730 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.954740 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.954750 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.954764 5110 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 
00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.954782 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.954807 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.954819 5110 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:41 crc kubenswrapper[5110]: I0126 00:13:41.954829 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4b9ee78-19a7-41bf-97e4-9ab13bad2730-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:42 crc kubenswrapper[5110]: I0126 00:13:42.346305 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:42 crc kubenswrapper[5110]: I0126 00:13:42.346378 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:42 crc kubenswrapper[5110]: I0126 00:13:42.356624 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:42 crc kubenswrapper[5110]: I0126 00:13:42.476569 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" 
event={"ID":"b4b9ee78-19a7-41bf-97e4-9ab13bad2730","Type":"ContainerDied","Data":"25de0b13940cacbf707b15fc9d23a9accb6bae86aaa0bb0b92de6c4367344819"} Jan 26 00:13:42 crc kubenswrapper[5110]: I0126 00:13:42.476664 5110 scope.go:117] "RemoveContainer" containerID="95597becabb11d191639e23ff0da6faf49de134608802bdcff1e857e652878e3" Jan 26 00:13:42 crc kubenswrapper[5110]: I0126 00:13:42.476610 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-tg86c" Jan 26 00:13:42 crc kubenswrapper[5110]: I0126 00:13:42.559172 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:13:45 crc kubenswrapper[5110]: I0126 00:13:45.666304 5110 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:45 crc kubenswrapper[5110]: I0126 00:13:45.666878 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:45 crc kubenswrapper[5110]: I0126 00:13:45.832549 5110 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="8439dc33-8c9c-42b4-a0fd-a2063a9854e7" Jan 26 00:13:45 crc kubenswrapper[5110]: E0126 00:13:45.882894 5110 reflector.go:200] "Failed to watch" err="configmaps \"v4-0-config-system-trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" type="*v1.ConfigMap" Jan 26 00:13:46 crc kubenswrapper[5110]: E0126 00:13:46.012596 5110 reflector.go:200] "Failed to 
watch" err="secrets \"v4-0-config-system-ocp-branding-template\" is forbidden: User \"system:node:crc\" cannot watch resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" type="*v1.Secret" Jan 26 00:13:46 crc kubenswrapper[5110]: E0126 00:13:46.034400 5110 reflector.go:200] "Failed to watch" err="secrets \"v4-0-config-system-session\" is forbidden: User \"system:node:crc\" cannot watch resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" type="*v1.Secret" Jan 26 00:13:46 crc kubenswrapper[5110]: E0126 00:13:46.159944 5110 reflector.go:200] "Failed to watch" err="secrets \"oauth-openshift-dockercfg-d2bf2\" is forbidden: User \"system:node:crc\" cannot watch resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" type="*v1.Secret" Jan 26 00:13:46 crc kubenswrapper[5110]: E0126 00:13:46.263784 5110 reflector.go:200] "Failed to watch" err="secrets \"v4-0-config-system-router-certs\" is forbidden: User \"system:node:crc\" cannot watch resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" type="*v1.Secret" Jan 26 00:13:46 crc kubenswrapper[5110]: I0126 00:13:46.519497 5110 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="6bf35b8e-a6c5-479a-860f-db0308fb993b"
Jan 26 00:13:46 crc kubenswrapper[5110]: I0126 00:13:46.519538 5110 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6bf35b8e-a6c5-479a-860f-db0308fb993b"
Jan 26 00:13:46 crc kubenswrapper[5110]: I0126 00:13:46.524194 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:46 crc kubenswrapper[5110]: I0126 00:13:46.524345 5110 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="8439dc33-8c9c-42b4-a0fd-a2063a9854e7"
Jan 26 00:13:47 crc kubenswrapper[5110]: I0126 00:13:47.527780 5110 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6bf35b8e-a6c5-479a-860f-db0308fb993b"
Jan 26 00:13:47 crc kubenswrapper[5110]: I0126 00:13:47.527836 5110 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6bf35b8e-a6c5-479a-860f-db0308fb993b"
Jan 26 00:13:47 crc kubenswrapper[5110]: I0126 00:13:47.533301 5110 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="8439dc33-8c9c-42b4-a0fd-a2063a9854e7"
Jan 26 00:13:48 crc kubenswrapper[5110]: I0126 00:13:48.696695 5110 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 26 00:13:48 crc kubenswrapper[5110]: I0126 00:13:48.697454 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 26 00:13:55 crc kubenswrapper[5110]: I0126 00:13:55.911093 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Jan 26 00:13:56 crc kubenswrapper[5110]: I0126 00:13:56.250405 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Jan 26 00:13:56 crc kubenswrapper[5110]: I0126 00:13:56.571103 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Jan 26 00:13:56 crc kubenswrapper[5110]: I0126 00:13:56.593514 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Jan 26 00:13:56 crc kubenswrapper[5110]: I0126 00:13:56.635234 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 26 00:13:56 crc kubenswrapper[5110]: I0126 00:13:56.743695 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Jan 26 00:13:56 crc kubenswrapper[5110]: I0126 00:13:56.813236 5110 patch_prober.go:28] interesting pod/machine-config-daemon-c6tpr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 00:13:56 crc kubenswrapper[5110]: I0126 00:13:56.813348 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 00:13:56 crc kubenswrapper[5110]: I0126 00:13:56.813432 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr"
Jan 26 00:13:56 crc kubenswrapper[5110]: I0126 00:13:56.814090 5110 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8d9183336fcd7f82da30e7c47ac4c55b5870284a52f6c97a6046736bea666e8f"} pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 00:13:56 crc kubenswrapper[5110]: I0126 00:13:56.814233 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" containerID="cri-o://8d9183336fcd7f82da30e7c47ac4c55b5870284a52f6c97a6046736bea666e8f" gracePeriod=600
Jan 26 00:13:57 crc kubenswrapper[5110]: I0126 00:13:57.089760 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Jan 26 00:13:57 crc kubenswrapper[5110]: I0126 00:13:57.110957 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Jan 26 00:13:57 crc kubenswrapper[5110]: I0126 00:13:57.261142 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Jan 26 00:13:57 crc kubenswrapper[5110]: I0126 00:13:57.535411 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Jan 26 00:13:57 crc kubenswrapper[5110]: I0126 00:13:57.592593 5110 generic.go:358] "Generic (PLEG): container finished" podID="f15bed73-d669-439f-9828-7b952d9bfe65" containerID="8d9183336fcd7f82da30e7c47ac4c55b5870284a52f6c97a6046736bea666e8f" exitCode=0
Jan 26 00:13:57 crc kubenswrapper[5110]: I0126 00:13:57.592646 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" event={"ID":"f15bed73-d669-439f-9828-7b952d9bfe65","Type":"ContainerDied","Data":"8d9183336fcd7f82da30e7c47ac4c55b5870284a52f6c97a6046736bea666e8f"}
Jan 26 00:13:57 crc kubenswrapper[5110]: I0126 00:13:57.592676 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" event={"ID":"f15bed73-d669-439f-9828-7b952d9bfe65","Type":"ContainerStarted","Data":"4060a0b2df1e9f152ebc91345ed0924c6529a54f677fc892640b96180c61050a"}
Jan 26 00:13:57 crc kubenswrapper[5110]: I0126 00:13:57.959344 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Jan 26 00:13:58 crc kubenswrapper[5110]: I0126 00:13:58.471503 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Jan 26 00:13:58 crc kubenswrapper[5110]: I0126 00:13:58.489862 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Jan 26 00:13:58 crc kubenswrapper[5110]: I0126 00:13:58.514254 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Jan 26 00:13:58 crc kubenswrapper[5110]: I0126 00:13:58.651204 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\""
Jan 26 00:13:58 crc kubenswrapper[5110]: I0126 00:13:58.655536 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Jan 26 00:13:58 crc kubenswrapper[5110]: I0126 00:13:58.658009 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Jan 26 00:13:58 crc kubenswrapper[5110]: I0126 00:13:58.696292 5110 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 26 00:13:58 crc kubenswrapper[5110]: I0126 00:13:58.696386 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 26 00:13:58 crc kubenswrapper[5110]: I0126 00:13:58.696479 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 00:13:58 crc kubenswrapper[5110]: I0126 00:13:58.697577 5110 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"5cb8582ca837c8461488d8ba1846f15ded0ac3505a57e90bb5b1977f2dcff6d1"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Jan 26 00:13:58 crc kubenswrapper[5110]: I0126 00:13:58.697688 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" containerID="cri-o://5cb8582ca837c8461488d8ba1846f15ded0ac3505a57e90bb5b1977f2dcff6d1" gracePeriod=30
Jan 26 00:13:58 crc kubenswrapper[5110]: I0126 00:13:58.698192 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Jan 26 00:13:58 crc kubenswrapper[5110]: I0126 00:13:58.737392 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Jan 26 00:13:58 crc kubenswrapper[5110]: I0126 00:13:58.829022 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Jan 26 00:13:58 crc kubenswrapper[5110]: I0126 00:13:58.848603 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Jan 26 00:13:58 crc kubenswrapper[5110]: I0126 00:13:58.852716 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Jan 26 00:13:58 crc kubenswrapper[5110]: I0126 00:13:58.864485 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Jan 26 00:13:58 crc kubenswrapper[5110]: I0126 00:13:58.923912 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Jan 26 00:13:58 crc kubenswrapper[5110]: I0126 00:13:58.934630 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Jan 26 00:13:58 crc kubenswrapper[5110]: I0126 00:13:58.954341 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Jan 26 00:13:59 crc kubenswrapper[5110]: I0126 00:13:59.042431 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Jan 26 00:13:59 crc kubenswrapper[5110]: I0126 00:13:59.119204 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Jan 26 00:13:59 crc kubenswrapper[5110]: I0126 00:13:59.255345 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Jan 26 00:13:59 crc kubenswrapper[5110]: I0126 00:13:59.303009 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Jan 26 00:13:59 crc kubenswrapper[5110]: I0126 00:13:59.317136 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Jan 26 00:13:59 crc kubenswrapper[5110]: I0126 00:13:59.469075 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Jan 26 00:13:59 crc kubenswrapper[5110]: I0126 00:13:59.705108 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Jan 26 00:13:59 crc kubenswrapper[5110]: I0126 00:13:59.715060 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Jan 26 00:13:59 crc kubenswrapper[5110]: I0126 00:13:59.737981 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Jan 26 00:13:59 crc kubenswrapper[5110]: I0126 00:13:59.750500 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Jan 26 00:13:59 crc kubenswrapper[5110]: I0126 00:13:59.770021 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Jan 26 00:13:59 crc kubenswrapper[5110]: I0126 00:13:59.774516 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Jan 26 00:13:59 crc kubenswrapper[5110]: I0126 00:13:59.812684 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\""
Jan 26 00:13:59 crc kubenswrapper[5110]: I0126 00:13:59.816053 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Jan 26 00:13:59 crc kubenswrapper[5110]: I0126 00:13:59.926232 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Jan 26 00:14:00 crc kubenswrapper[5110]: I0126 00:14:00.017738 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Jan 26 00:14:00 crc kubenswrapper[5110]: I0126 00:14:00.085147 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Jan 26 00:14:00 crc kubenswrapper[5110]: I0126 00:14:00.356407 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Jan 26 00:14:00 crc kubenswrapper[5110]: I0126 00:14:00.390416 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Jan 26 00:14:00 crc kubenswrapper[5110]: I0126 00:14:00.422522 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Jan 26 00:14:00 crc kubenswrapper[5110]: I0126 00:14:00.444387 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Jan 26 00:14:00 crc kubenswrapper[5110]: I0126 00:14:00.512723 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Jan 26 00:14:00 crc kubenswrapper[5110]: I0126 00:14:00.527436 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Jan 26 00:14:00 crc kubenswrapper[5110]: I0126 00:14:00.544722 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Jan 26 00:14:00 crc kubenswrapper[5110]: I0126 00:14:00.564133 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Jan 26 00:14:00 crc kubenswrapper[5110]: I0126 00:14:00.755295 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Jan 26 00:14:00 crc kubenswrapper[5110]: I0126 00:14:00.759338 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Jan 26 00:14:00 crc kubenswrapper[5110]: I0126 00:14:00.876864 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Jan 26 00:14:00 crc kubenswrapper[5110]: I0126 00:14:00.958118 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Jan 26 00:14:00 crc kubenswrapper[5110]: I0126 00:14:00.982046 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Jan 26 00:14:01 crc kubenswrapper[5110]: I0126 00:14:01.040862 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Jan 26 00:14:01 crc kubenswrapper[5110]: I0126 00:14:01.066829 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Jan 26 00:14:01 crc kubenswrapper[5110]: I0126 00:14:01.067872 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Jan 26 00:14:01 crc kubenswrapper[5110]: I0126 00:14:01.150115 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Jan 26 00:14:01 crc kubenswrapper[5110]: I0126 00:14:01.176016 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Jan 26 00:14:01 crc kubenswrapper[5110]: I0126 00:14:01.199543 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Jan 26 00:14:01 crc kubenswrapper[5110]: I0126 00:14:01.244149 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Jan 26 00:14:01 crc kubenswrapper[5110]: I0126 00:14:01.337260 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Jan 26 00:14:01 crc kubenswrapper[5110]: I0126 00:14:01.521021 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Jan 26 00:14:01 crc kubenswrapper[5110]: I0126 00:14:01.552150 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\""
Jan 26 00:14:01 crc kubenswrapper[5110]: I0126 00:14:01.569893 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Jan 26 00:14:01 crc kubenswrapper[5110]: I0126 00:14:01.592561 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Jan 26 00:14:01 crc kubenswrapper[5110]: I0126 00:14:01.627500 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Jan 26 00:14:01 crc kubenswrapper[5110]: I0126 00:14:01.686164 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Jan 26 00:14:01 crc kubenswrapper[5110]: I0126 00:14:01.776737 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Jan 26 00:14:01 crc kubenswrapper[5110]: I0126 00:14:01.855931 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Jan 26 00:14:01 crc kubenswrapper[5110]: I0126 00:14:01.875758 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Jan 26 00:14:01 crc kubenswrapper[5110]: I0126 00:14:01.895936 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Jan 26 00:14:01 crc kubenswrapper[5110]: I0126 00:14:01.956084 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Jan 26 00:14:01 crc kubenswrapper[5110]: I0126 00:14:01.987926 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.008895 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.072290 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.116398 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.222966 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.236519 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.286328 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.337186 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.362391 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.431263 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.443870 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.459867 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.486836 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.597038 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.629068 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.776930 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.859977 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.867335 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.888610 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.928500 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.928560 5110 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.981369 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Jan 26 00:14:02 crc kubenswrapper[5110]: I0126 00:14:02.992129 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.044563 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.088033 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.131026 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.173432 5110 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.192914 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.270689 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.347625 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.384354 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.406619 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.408966 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.430161 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.476024 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.476053 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.526250 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.549652 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.617096 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.621548 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.621717 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.640024 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.649723 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.708667 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.754884 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.837534 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.856393 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.903776 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.907764 5110 ???:1] "http: TLS handshake error from 192.168.126.11:51680: no serving certificate available for the kubelet"
Jan 26 00:14:03 crc kubenswrapper[5110]: I0126 00:14:03.932788 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Jan 26 00:14:04 crc kubenswrapper[5110]: I0126 00:14:04.165290 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Jan 26 00:14:04 crc kubenswrapper[5110]: I0126 00:14:04.292779 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Jan 26 00:14:04 crc kubenswrapper[5110]: I0126 00:14:04.315401 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Jan 26 00:14:04 crc kubenswrapper[5110]: I0126 00:14:04.350418 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Jan 26 00:14:04 crc kubenswrapper[5110]: I0126 00:14:04.401954 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Jan 26 00:14:04 crc kubenswrapper[5110]: I0126 00:14:04.750257 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Jan 26 00:14:04 crc kubenswrapper[5110]: I0126 00:14:04.755841 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Jan 26 00:14:04 crc kubenswrapper[5110]: I0126 00:14:04.877252 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Jan 26 00:14:04 crc kubenswrapper[5110]: I0126 00:14:04.931257 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 26 00:14:04 crc kubenswrapper[5110]: I0126 00:14:04.935202 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Jan 26 00:14:04 crc kubenswrapper[5110]: I0126 00:14:04.977303 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Jan 26 00:14:05 crc kubenswrapper[5110]: I0126 00:14:05.166108 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Jan 26 00:14:05 crc kubenswrapper[5110]: I0126 00:14:05.252072 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Jan 26 00:14:05 crc kubenswrapper[5110]: I0126 00:14:05.272215 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Jan 26 00:14:05 crc kubenswrapper[5110]: I0126 00:14:05.290303 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Jan 26 00:14:05 crc kubenswrapper[5110]: I0126 00:14:05.332636 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Jan 26 00:14:05 crc kubenswrapper[5110]: I0126 00:14:05.371773 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Jan 26 00:14:05 crc kubenswrapper[5110]: I0126 00:14:05.391369 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Jan 26 00:14:05 crc kubenswrapper[5110]: I0126 00:14:05.431269 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Jan 26 00:14:05 crc kubenswrapper[5110]: I0126 00:14:05.446391 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Jan 26 00:14:05 crc kubenswrapper[5110]: I0126 00:14:05.478267 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Jan 26 00:14:05 crc kubenswrapper[5110]: I0126 00:14:05.485461 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Jan 26 00:14:05 crc kubenswrapper[5110]: I0126 00:14:05.530270 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Jan 26 00:14:05 crc kubenswrapper[5110]: I0126 00:14:05.569971 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Jan 26 00:14:05 crc kubenswrapper[5110]: I0126 00:14:05.606499 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Jan 26 00:14:05 crc kubenswrapper[5110]: I0126 00:14:05.627275 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Jan 26 00:14:05 crc kubenswrapper[5110]: I0126 00:14:05.688666 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Jan 26 00:14:05 crc kubenswrapper[5110]: I0126 00:14:05.843687 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Jan 26 00:14:06 crc kubenswrapper[5110]: I0126 00:14:06.164239 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Jan 26 00:14:06 crc kubenswrapper[5110]: I0126 00:14:06.269393 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Jan 26 00:14:06 crc kubenswrapper[5110]: I0126 00:14:06.321238 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Jan 26 00:14:06 crc kubenswrapper[5110]: I0126 00:14:06.368546 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Jan 26 00:14:06 crc kubenswrapper[5110]: I0126 00:14:06.478737 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Jan 26 00:14:06 crc kubenswrapper[5110]: I0126 00:14:06.522682 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Jan 26 00:14:06 crc kubenswrapper[5110]: I0126 00:14:06.539938 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Jan 26 00:14:06 crc kubenswrapper[5110]: I0126 00:14:06.561432 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Jan 26 00:14:06 crc kubenswrapper[5110]: I0126 00:14:06.600913 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Jan 26 00:14:06 crc kubenswrapper[5110]: I0126 00:14:06.661090 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Jan 26 00:14:06 crc kubenswrapper[5110]: I0126 00:14:06.684819 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Jan 26 00:14:06 crc kubenswrapper[5110]: I0126 00:14:06.735717 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Jan 26 00:14:06 crc kubenswrapper[5110]: I0126 00:14:06.778950 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Jan 26 00:14:06 crc kubenswrapper[5110]: I0126 00:14:06.829057 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Jan 26 00:14:06 crc kubenswrapper[5110]: I0126 00:14:06.834105 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Jan 26 00:14:06 crc kubenswrapper[5110]: I0126 00:14:06.868177 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Jan 26 00:14:06 crc kubenswrapper[5110]: I0126 00:14:06.896274 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Jan 26 00:14:06 crc kubenswrapper[5110]: I0126 00:14:06.974735 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap"
reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 26 00:14:06 crc kubenswrapper[5110]: I0126 00:14:06.981061 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 26 00:14:06 crc kubenswrapper[5110]: I0126 00:14:06.985647 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 26 00:14:06 crc kubenswrapper[5110]: I0126 00:14:06.993997 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.132395 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.251898 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.276253 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.323525 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.430513 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.448994 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.484277 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.542148 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.542177 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.562544 5110 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.567737 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-tg86c","openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.567841 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.568418 5110 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6bf35b8e-a6c5-479a-860f-db0308fb993b" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.568456 5110 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6bf35b8e-a6c5-479a-860f-db0308fb993b" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.568581 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b4b9ee78-19a7-41bf-97e4-9ab13bad2730" containerName="oauth-openshift" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.568610 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b9ee78-19a7-41bf-97e4-9ab13bad2730" containerName="oauth-openshift" 
Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.568651 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c1dd88ef-db06-41bc-8be3-53730c6fa57f" containerName="installer" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.568662 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1dd88ef-db06-41bc-8be3-53730c6fa57f" containerName="installer" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.568692 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="497eadd3-187e-4dfb-82c3-fad0d59eb723" containerName="image-pruner" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.568702 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="497eadd3-187e-4dfb-82c3-fad0d59eb723" containerName="image-pruner" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.568839 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="b4b9ee78-19a7-41bf-97e4-9ab13bad2730" containerName="oauth-openshift" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.568856 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="c1dd88ef-db06-41bc-8be3-53730c6fa57f" containerName="installer" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.568871 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="497eadd3-187e-4dfb-82c3-fad0d59eb723" containerName="image-pruner" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.570676 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.596359 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.597715 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 26 00:14:07 crc 
kubenswrapper[5110]: I0126 00:14:07.628532 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.632279 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.632376 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.639043 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.639471 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.639466 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.639665 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.639690 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.639690 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.640704 5110 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.646563 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.648758 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.649231 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.649343 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.650373 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.650709 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.650757 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.660338 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.664493 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 26 00:14:07 crc kubenswrapper[5110]: 
I0126 00:14:07.664673 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x6pw\" (UniqueName: \"kubernetes.io/projected/34d42d82-664e-4e27-9378-0a1d17c4b920-kube-api-access-6x6pw\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.664724 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.664859 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.664899 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.664927 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.664974 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-user-template-error\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.665007 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.665078 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34d42d82-664e-4e27-9378-0a1d17c4b920-audit-dir\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.665136 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.665204 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-user-template-login\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.665256 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34d42d82-664e-4e27-9378-0a1d17c4b920-audit-policies\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.665324 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-system-session\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.665397 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 
crc kubenswrapper[5110]: I0126 00:14:07.665418 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.697358 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.697312644 podStartE2EDuration="22.697312644s" podCreationTimestamp="2026-01-26 00:13:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:14:07.679456443 +0000 UTC m=+304.908355062" watchObservedRunningTime="2026-01-26 00:14:07.697312644 +0000 UTC m=+304.926211253" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.746472 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.756161 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.767106 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6x6pw\" (UniqueName: \"kubernetes.io/projected/34d42d82-664e-4e27-9378-0a1d17c4b920-kube-api-access-6x6pw\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.767183 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.767239 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.767278 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.767306 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.767332 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-user-template-error\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: 
\"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.767364 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.767411 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34d42d82-664e-4e27-9378-0a1d17c4b920-audit-dir\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.767481 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34d42d82-664e-4e27-9378-0a1d17c4b920-audit-dir\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.767527 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.767560 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-user-template-login\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.767587 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34d42d82-664e-4e27-9378-0a1d17c4b920-audit-policies\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.767616 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-system-session\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.767650 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.767689 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " 
pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.768513 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.768513 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.769062 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34d42d82-664e-4e27-9378-0a1d17c4b920-audit-policies\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.769407 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.773938 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.773992 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.774045 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.774074 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.774639 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-user-template-error\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " 
pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc"
Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.774827 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-system-session\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc"
Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.775130 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-user-template-login\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc"
Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.775883 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34d42d82-664e-4e27-9378-0a1d17c4b920-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc"
Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.785967 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x6pw\" (UniqueName: \"kubernetes.io/projected/34d42d82-664e-4e27-9378-0a1d17c4b920-kube-api-access-6x6pw\") pod \"oauth-openshift-7b9c4656cc-6f7wc\" (UID: \"34d42d82-664e-4e27-9378-0a1d17c4b920\") " pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc"
Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.805731 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.811567 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.946586 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.958356 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc"
Jan 26 00:14:07 crc kubenswrapper[5110]: I0126 00:14:07.971348 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Jan 26 00:14:08 crc kubenswrapper[5110]: I0126 00:14:08.004386 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Jan 26 00:14:08 crc kubenswrapper[5110]: I0126 00:14:08.045714 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Jan 26 00:14:08 crc kubenswrapper[5110]: I0126 00:14:08.215885 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Jan 26 00:14:08 crc kubenswrapper[5110]: I0126 00:14:08.250251 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Jan 26 00:14:08 crc kubenswrapper[5110]: I0126 00:14:08.253005 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc"]
Jan 26 00:14:08 crc kubenswrapper[5110]: I0126 00:14:08.253301 5110 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 00:14:08 crc kubenswrapper[5110]: I0126 00:14:08.270191 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Jan 26 00:14:08 crc kubenswrapper[5110]: I0126 00:14:08.307746 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Jan 26 00:14:08 crc kubenswrapper[5110]: I0126 00:14:08.360506 5110 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 26 00:14:08 crc kubenswrapper[5110]: I0126 00:14:08.360884 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://077aa94e712614c3555071929003ecea892ce2f732994a30c717a9ab162b7927" gracePeriod=5
Jan 26 00:14:08 crc kubenswrapper[5110]: I0126 00:14:08.363691 5110 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 26 00:14:08 crc kubenswrapper[5110]: I0126 00:14:08.409335 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Jan 26 00:14:08 crc kubenswrapper[5110]: I0126 00:14:08.441513 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Jan 26 00:14:08 crc kubenswrapper[5110]: I0126 00:14:08.490672 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Jan 26 00:14:08 crc kubenswrapper[5110]: I0126 00:14:08.601015 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Jan 26 00:14:08 crc kubenswrapper[5110]: I0126 00:14:08.677235 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Jan 26 00:14:08 crc kubenswrapper[5110]: I0126 00:14:08.677307 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" event={"ID":"34d42d82-664e-4e27-9378-0a1d17c4b920","Type":"ContainerStarted","Data":"6a72fcbd3baf70e69bd26626178f8d0cff49156c59beb93bd333a542fddb7298"}
Jan 26 00:14:08 crc kubenswrapper[5110]: I0126 00:14:08.677781 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" event={"ID":"34d42d82-664e-4e27-9378-0a1d17c4b920","Type":"ContainerStarted","Data":"e41088280a20e968d3f41b92338051635336886219d7d0e824421ba9f38a9e4b"}
Jan 26 00:14:08 crc kubenswrapper[5110]: I0126 00:14:08.754104 5110 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 26 00:14:09 crc kubenswrapper[5110]: I0126 00:14:09.010131 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Jan 26 00:14:09 crc kubenswrapper[5110]: I0126 00:14:09.018020 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Jan 26 00:14:09 crc kubenswrapper[5110]: I0126 00:14:09.055173 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Jan 26 00:14:09 crc kubenswrapper[5110]: I0126 00:14:09.142192 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Jan 26 00:14:09 crc kubenswrapper[5110]: I0126 00:14:09.326569 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4b9ee78-19a7-41bf-97e4-9ab13bad2730" path="/var/lib/kubelet/pods/b4b9ee78-19a7-41bf-97e4-9ab13bad2730/volumes"
Jan 26 00:14:09 crc kubenswrapper[5110]: I0126 00:14:09.507755 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Jan 26 00:14:09 crc kubenswrapper[5110]: I0126 00:14:09.546422 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 26 00:14:09 crc kubenswrapper[5110]: I0126 00:14:09.552022 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Jan 26 00:14:09 crc kubenswrapper[5110]: I0126 00:14:09.588189 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Jan 26 00:14:09 crc kubenswrapper[5110]: I0126 00:14:09.687225 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Jan 26 00:14:09 crc kubenswrapper[5110]: I0126 00:14:09.687709 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc"
Jan 26 00:14:09 crc kubenswrapper[5110]: I0126 00:14:09.695518 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc"
Jan 26 00:14:09 crc kubenswrapper[5110]: I0126 00:14:09.716621 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Jan 26 00:14:09 crc kubenswrapper[5110]: I0126 00:14:09.732814 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Jan 26 00:14:09 crc kubenswrapper[5110]: I0126 00:14:09.734395 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7b9c4656cc-6f7wc" podStartSLOduration=53.734376737 podStartE2EDuration="53.734376737s" podCreationTimestamp="2026-01-26 00:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:14:09.714087215 +0000 UTC m=+306.942985854" watchObservedRunningTime="2026-01-26 00:14:09.734376737 +0000 UTC m=+306.963275346"
Jan 26 00:14:09 crc kubenswrapper[5110]: I0126 00:14:09.736979 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Jan 26 00:14:09 crc kubenswrapper[5110]: I0126 00:14:09.823156 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Jan 26 00:14:10 crc kubenswrapper[5110]: I0126 00:14:10.095021 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Jan 26 00:14:10 crc kubenswrapper[5110]: I0126 00:14:10.126774 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Jan 26 00:14:10 crc kubenswrapper[5110]: I0126 00:14:10.178252 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Jan 26 00:14:10 crc kubenswrapper[5110]: I0126 00:14:10.181511 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Jan 26 00:14:10 crc kubenswrapper[5110]: I0126 00:14:10.229162 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Jan 26 00:14:10 crc kubenswrapper[5110]: I0126 00:14:10.306077 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Jan 26 00:14:10 crc kubenswrapper[5110]: I0126 00:14:10.310108 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Jan 26 00:14:10 crc kubenswrapper[5110]: I0126 00:14:10.336187 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 26 00:14:10 crc kubenswrapper[5110]: I0126 00:14:10.413454 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Jan 26 00:14:10 crc kubenswrapper[5110]: I0126 00:14:10.495897 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Jan 26 00:14:10 crc kubenswrapper[5110]: I0126 00:14:10.533361 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Jan 26 00:14:10 crc kubenswrapper[5110]: I0126 00:14:10.564173 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Jan 26 00:14:10 crc kubenswrapper[5110]: I0126 00:14:10.596087 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Jan 26 00:14:11 crc kubenswrapper[5110]: I0126 00:14:11.038310 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Jan 26 00:14:11 crc kubenswrapper[5110]: I0126 00:14:11.049928 5110 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 26 00:14:11 crc kubenswrapper[5110]: I0126 00:14:11.351492 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Jan 26 00:14:11 crc kubenswrapper[5110]: I0126 00:14:11.712855 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Jan 26 00:14:13 crc kubenswrapper[5110]: I0126 00:14:13.719565 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Jan 26 00:14:13 crc kubenswrapper[5110]: I0126 00:14:13.720221 5110 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="077aa94e712614c3555071929003ecea892ce2f732994a30c717a9ab162b7927" exitCode=137
Jan 26 00:14:13 crc kubenswrapper[5110]: I0126 00:14:13.952520 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Jan 26 00:14:13 crc kubenswrapper[5110]: I0126 00:14:13.953080 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 00:14:13 crc kubenswrapper[5110]: I0126 00:14:13.955827 5110 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Jan 26 00:14:14 crc kubenswrapper[5110]: I0126 00:14:14.086927 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 26 00:14:14 crc kubenswrapper[5110]: I0126 00:14:14.087006 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 26 00:14:14 crc kubenswrapper[5110]: I0126 00:14:14.087159 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 26 00:14:14 crc kubenswrapper[5110]: I0126 00:14:14.087207 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 26 00:14:14 crc kubenswrapper[5110]: I0126 00:14:14.087273 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 26 00:14:14 crc kubenswrapper[5110]: I0126 00:14:14.087411 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 26 00:14:14 crc kubenswrapper[5110]: I0126 00:14:14.087526 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 26 00:14:14 crc kubenswrapper[5110]: I0126 00:14:14.087843 5110 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\""
Jan 26 00:14:14 crc kubenswrapper[5110]: I0126 00:14:14.087869 5110 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\""
Jan 26 00:14:14 crc kubenswrapper[5110]: I0126 00:14:14.087913 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 26 00:14:14 crc kubenswrapper[5110]: I0126 00:14:14.087946 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 26 00:14:14 crc kubenswrapper[5110]: I0126 00:14:14.131198 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 26 00:14:14 crc kubenswrapper[5110]: I0126 00:14:14.189291 5110 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 26 00:14:14 crc kubenswrapper[5110]: I0126 00:14:14.189340 5110 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 26 00:14:14 crc kubenswrapper[5110]: I0126 00:14:14.189353 5110 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\""
Jan 26 00:14:14 crc kubenswrapper[5110]: I0126 00:14:14.730343 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Jan 26 00:14:14 crc kubenswrapper[5110]: I0126 00:14:14.731148 5110 scope.go:117] "RemoveContainer" containerID="077aa94e712614c3555071929003ecea892ce2f732994a30c717a9ab162b7927"
Jan 26 00:14:14 crc kubenswrapper[5110]: I0126 00:14:14.731191 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 00:14:14 crc kubenswrapper[5110]: I0126 00:14:14.749279 5110 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Jan 26 00:14:15 crc kubenswrapper[5110]: I0126 00:14:15.324832 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes"
Jan 26 00:14:21 crc kubenswrapper[5110]: I0126 00:14:21.058454 5110 ???:1] "http: TLS handshake error from 192.168.126.11:39596: no serving certificate available for the kubelet"
Jan 26 00:14:29 crc kubenswrapper[5110]: I0126 00:14:29.742651 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Jan 26 00:14:30 crc kubenswrapper[5110]: I0126 00:14:30.851208 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 26 00:14:30 crc kubenswrapper[5110]: I0126 00:14:30.853727 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 26 00:14:30 crc kubenswrapper[5110]: I0126 00:14:30.853772 5110 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="5cb8582ca837c8461488d8ba1846f15ded0ac3505a57e90bb5b1977f2dcff6d1" exitCode=137
Jan 26 00:14:30 crc kubenswrapper[5110]: I0126 00:14:30.853845 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"5cb8582ca837c8461488d8ba1846f15ded0ac3505a57e90bb5b1977f2dcff6d1"}
Jan 26 00:14:30 crc kubenswrapper[5110]: I0126 00:14:30.853889 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"b930ecf240d9f5dfdbf7c2e3da862c87ac8aa45634c4f5aecdb3b2d32f86d146"}
Jan 26 00:14:30 crc kubenswrapper[5110]: I0126 00:14:30.853916 5110 scope.go:117] "RemoveContainer" containerID="201fbacf5107a565f88c0b909a2d52c716cda4ea473eb26d2c613fe248ecf434"
Jan 26 00:14:31 crc kubenswrapper[5110]: I0126 00:14:31.862078 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 26 00:14:32 crc kubenswrapper[5110]: I0126 00:14:32.559250 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 00:14:32 crc kubenswrapper[5110]: I0126 00:14:32.957732 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Jan 26 00:14:37 crc kubenswrapper[5110]: I0126 00:14:37.044613 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Jan 26 00:14:38 crc kubenswrapper[5110]: I0126 00:14:38.695335 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 00:14:38 crc kubenswrapper[5110]: I0126 00:14:38.699516 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 00:14:41 crc kubenswrapper[5110]: I0126 00:14:41.972606 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Jan 26 00:14:48 crc kubenswrapper[5110]: I0126 00:14:48.913918 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 00:14:49 crc kubenswrapper[5110]: I0126 00:14:49.571498 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-684d994d47-8c2v7"]
Jan 26 00:14:49 crc kubenswrapper[5110]: I0126 00:14:49.571882 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7" podUID="d38403be-a3f6-43a0-9957-39bc82a8870c" containerName="controller-manager" containerID="cri-o://7db8f11014a8cca9f97ca060a0b931722b85e6cdcbd2bef4f0dd9ba566249c0f" gracePeriod=30
Jan 26 00:14:49 crc kubenswrapper[5110]: I0126 00:14:49.619142 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m"]
Jan 26 00:14:49 crc kubenswrapper[5110]: I0126 00:14:49.619566 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" podUID="5bbb8424-18f0-4d1c-8f78-7a42e252cafb" containerName="route-controller-manager" containerID="cri-o://f4a7fdbfd81526c18db00a68c0465b19bb2ffc7fb1d2ca10cac82ebd84a40e85" gracePeriod=30
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.520025 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7"
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.523778 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m"
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.558789 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8"]
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.560358 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d38403be-a3f6-43a0-9957-39bc82a8870c" containerName="controller-manager"
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.560406 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="d38403be-a3f6-43a0-9957-39bc82a8870c" containerName="controller-manager"
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.560441 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.560449 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.560473 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5bbb8424-18f0-4d1c-8f78-7a42e252cafb" containerName="route-controller-manager"
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.560479 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bbb8424-18f0-4d1c-8f78-7a42e252cafb" containerName="route-controller-manager"
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.560596 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="5bbb8424-18f0-4d1c-8f78-7a42e252cafb" containerName="route-controller-manager"
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.560608 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="d38403be-a3f6-43a0-9957-39bc82a8870c" containerName="controller-manager"
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.560616 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.613438 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d38403be-a3f6-43a0-9957-39bc82a8870c-proxy-ca-bundles\") pod \"d38403be-a3f6-43a0-9957-39bc82a8870c\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") "
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.613492 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9622\" (UniqueName: \"kubernetes.io/projected/d38403be-a3f6-43a0-9957-39bc82a8870c-kube-api-access-v9622\") pod \"d38403be-a3f6-43a0-9957-39bc82a8870c\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") "
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.613513 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d38403be-a3f6-43a0-9957-39bc82a8870c-client-ca\") pod \"d38403be-a3f6-43a0-9957-39bc82a8870c\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") "
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.613535 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-client-ca\") pod \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\" (UID: \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\") "
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.613553 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-config\") pod \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\" (UID: \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\") "
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.613600 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d38403be-a3f6-43a0-9957-39bc82a8870c-tmp\") pod \"d38403be-a3f6-43a0-9957-39bc82a8870c\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") "
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.613681 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d38403be-a3f6-43a0-9957-39bc82a8870c-serving-cert\") pod \"d38403be-a3f6-43a0-9957-39bc82a8870c\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") "
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.613701 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d38403be-a3f6-43a0-9957-39bc82a8870c-config\") pod \"d38403be-a3f6-43a0-9957-39bc82a8870c\" (UID: \"d38403be-a3f6-43a0-9957-39bc82a8870c\") "
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.613825 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-tmp\") pod \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\" (UID: \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\") "
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.613853 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wp88\" (UniqueName: \"kubernetes.io/projected/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-kube-api-access-8wp88\") pod \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\" (UID: \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\") "
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.613985 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-serving-cert\") pod \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\" (UID: \"5bbb8424-18f0-4d1c-8f78-7a42e252cafb\") "
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.614449 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d38403be-a3f6-43a0-9957-39bc82a8870c-client-ca" (OuterVolumeSpecName: "client-ca") pod "d38403be-a3f6-43a0-9957-39bc82a8870c" (UID: "d38403be-a3f6-43a0-9957-39bc82a8870c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.614525 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d38403be-a3f6-43a0-9957-39bc82a8870c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d38403be-a3f6-43a0-9957-39bc82a8870c" (UID: "d38403be-a3f6-43a0-9957-39bc82a8870c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.614729 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-tmp" (OuterVolumeSpecName: "tmp") pod "5bbb8424-18f0-4d1c-8f78-7a42e252cafb" (UID: "5bbb8424-18f0-4d1c-8f78-7a42e252cafb"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.615070 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-client-ca" (OuterVolumeSpecName: "client-ca") pod "5bbb8424-18f0-4d1c-8f78-7a42e252cafb" (UID: "5bbb8424-18f0-4d1c-8f78-7a42e252cafb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.615189 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-config" (OuterVolumeSpecName: "config") pod "5bbb8424-18f0-4d1c-8f78-7a42e252cafb" (UID: "5bbb8424-18f0-4d1c-8f78-7a42e252cafb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.615316 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d38403be-a3f6-43a0-9957-39bc82a8870c-tmp" (OuterVolumeSpecName: "tmp") pod "d38403be-a3f6-43a0-9957-39bc82a8870c" (UID: "d38403be-a3f6-43a0-9957-39bc82a8870c"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.615385 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d38403be-a3f6-43a0-9957-39bc82a8870c-config" (OuterVolumeSpecName: "config") pod "d38403be-a3f6-43a0-9957-39bc82a8870c" (UID: "d38403be-a3f6-43a0-9957-39bc82a8870c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.621009 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5bbb8424-18f0-4d1c-8f78-7a42e252cafb" (UID: "5bbb8424-18f0-4d1c-8f78-7a42e252cafb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.621027 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-kube-api-access-8wp88" (OuterVolumeSpecName: "kube-api-access-8wp88") pod "5bbb8424-18f0-4d1c-8f78-7a42e252cafb" (UID: "5bbb8424-18f0-4d1c-8f78-7a42e252cafb"). InnerVolumeSpecName "kube-api-access-8wp88". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.621548 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d38403be-a3f6-43a0-9957-39bc82a8870c-kube-api-access-v9622" (OuterVolumeSpecName: "kube-api-access-v9622") pod "d38403be-a3f6-43a0-9957-39bc82a8870c" (UID: "d38403be-a3f6-43a0-9957-39bc82a8870c"). InnerVolumeSpecName "kube-api-access-v9622". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.622519 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d38403be-a3f6-43a0-9957-39bc82a8870c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d38403be-a3f6-43a0-9957-39bc82a8870c" (UID: "d38403be-a3f6-43a0-9957-39bc82a8870c"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.716049 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d38403be-a3f6-43a0-9957-39bc82a8870c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.716097 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d38403be-a3f6-43a0-9957-39bc82a8870c-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.716111 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.716124 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8wp88\" (UniqueName: \"kubernetes.io/projected/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-kube-api-access-8wp88\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.716138 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.716150 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d38403be-a3f6-43a0-9957-39bc82a8870c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.716166 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v9622\" (UniqueName: \"kubernetes.io/projected/d38403be-a3f6-43a0-9957-39bc82a8870c-kube-api-access-v9622\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.716184 5110 
reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d38403be-a3f6-43a0-9957-39bc82a8870c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.716196 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.716208 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bbb8424-18f0-4d1c-8f78-7a42e252cafb-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.716220 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d38403be-a3f6-43a0-9957-39bc82a8870c-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.960046 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8"] Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.960102 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td"] Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.960411 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.991890 5110 generic.go:358] "Generic (PLEG): container finished" podID="5bbb8424-18f0-4d1c-8f78-7a42e252cafb" containerID="f4a7fdbfd81526c18db00a68c0465b19bb2ffc7fb1d2ca10cac82ebd84a40e85" exitCode=0 Jan 26 00:14:50 crc kubenswrapper[5110]: I0126 00:14:50.993744 5110 generic.go:358] "Generic (PLEG): container finished" podID="d38403be-a3f6-43a0-9957-39bc82a8870c" containerID="7db8f11014a8cca9f97ca060a0b931722b85e6cdcbd2bef4f0dd9ba566249c0f" exitCode=0 Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.122772 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02a8eb21-0978-4b59-827e-1a6074ab093e-proxy-ca-bundles\") pod \"controller-manager-5dbff5d56f-8fzr8\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") " pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.122864 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02a8eb21-0978-4b59-827e-1a6074ab093e-config\") pod \"controller-manager-5dbff5d56f-8fzr8\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") " pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.123083 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02a8eb21-0978-4b59-827e-1a6074ab093e-client-ca\") pod \"controller-manager-5dbff5d56f-8fzr8\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") " pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.123215 5110 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7wmd\" (UniqueName: \"kubernetes.io/projected/02a8eb21-0978-4b59-827e-1a6074ab093e-kube-api-access-s7wmd\") pod \"controller-manager-5dbff5d56f-8fzr8\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") " pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.123254 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02a8eb21-0978-4b59-827e-1a6074ab093e-serving-cert\") pod \"controller-manager-5dbff5d56f-8fzr8\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") " pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.123277 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/02a8eb21-0978-4b59-827e-1a6074ab093e-tmp\") pod \"controller-manager-5dbff5d56f-8fzr8\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") " pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.225080 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02a8eb21-0978-4b59-827e-1a6074ab093e-proxy-ca-bundles\") pod \"controller-manager-5dbff5d56f-8fzr8\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") " pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.225275 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02a8eb21-0978-4b59-827e-1a6074ab093e-config\") pod \"controller-manager-5dbff5d56f-8fzr8\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") 
" pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.225305 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02a8eb21-0978-4b59-827e-1a6074ab093e-client-ca\") pod \"controller-manager-5dbff5d56f-8fzr8\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") " pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.225346 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s7wmd\" (UniqueName: \"kubernetes.io/projected/02a8eb21-0978-4b59-827e-1a6074ab093e-kube-api-access-s7wmd\") pod \"controller-manager-5dbff5d56f-8fzr8\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") " pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.225376 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02a8eb21-0978-4b59-827e-1a6074ab093e-serving-cert\") pod \"controller-manager-5dbff5d56f-8fzr8\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") " pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.225399 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/02a8eb21-0978-4b59-827e-1a6074ab093e-tmp\") pod \"controller-manager-5dbff5d56f-8fzr8\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") " pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.226015 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/02a8eb21-0978-4b59-827e-1a6074ab093e-tmp\") pod 
\"controller-manager-5dbff5d56f-8fzr8\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") " pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.226829 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02a8eb21-0978-4b59-827e-1a6074ab093e-client-ca\") pod \"controller-manager-5dbff5d56f-8fzr8\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") " pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.226845 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02a8eb21-0978-4b59-827e-1a6074ab093e-proxy-ca-bundles\") pod \"controller-manager-5dbff5d56f-8fzr8\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") " pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.227031 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02a8eb21-0978-4b59-827e-1a6074ab093e-config\") pod \"controller-manager-5dbff5d56f-8fzr8\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") " pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.231502 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02a8eb21-0978-4b59-827e-1a6074ab093e-serving-cert\") pod \"controller-manager-5dbff5d56f-8fzr8\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") " pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.246062 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7wmd\" (UniqueName: 
\"kubernetes.io/projected/02a8eb21-0978-4b59-827e-1a6074ab093e-kube-api-access-s7wmd\") pod \"controller-manager-5dbff5d56f-8fzr8\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") " pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.274651 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.337997 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td"] Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.338339 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.338348 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" event={"ID":"5bbb8424-18f0-4d1c-8f78-7a42e252cafb","Type":"ContainerDied","Data":"f4a7fdbfd81526c18db00a68c0465b19bb2ffc7fb1d2ca10cac82ebd84a40e85"} Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.338394 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" event={"ID":"5bbb8424-18f0-4d1c-8f78-7a42e252cafb","Type":"ContainerDied","Data":"2dd4d4861434d396bbb7f1cf4c5c00653695212039197a6bcfa2bac39ca97d4d"} Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.338147 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.338965 5110 scope.go:117] "RemoveContainer" containerID="f4a7fdbfd81526c18db00a68c0465b19bb2ffc7fb1d2ca10cac82ebd84a40e85" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.338106 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.378166 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7" event={"ID":"d38403be-a3f6-43a0-9957-39bc82a8870c","Type":"ContainerDied","Data":"7db8f11014a8cca9f97ca060a0b931722b85e6cdcbd2bef4f0dd9ba566249c0f"} Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.378223 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-684d994d47-8c2v7" event={"ID":"d38403be-a3f6-43a0-9957-39bc82a8870c","Type":"ContainerDied","Data":"e40aaf37cebda34064bfdf5608b88c7e1615c159b4cfec8165d74c703f03145f"} Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.386624 5110 scope.go:117] "RemoveContainer" containerID="f4a7fdbfd81526c18db00a68c0465b19bb2ffc7fb1d2ca10cac82ebd84a40e85" Jan 26 00:14:51 crc kubenswrapper[5110]: E0126 00:14:51.387622 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4a7fdbfd81526c18db00a68c0465b19bb2ffc7fb1d2ca10cac82ebd84a40e85\": container with ID starting with f4a7fdbfd81526c18db00a68c0465b19bb2ffc7fb1d2ca10cac82ebd84a40e85 not found: ID does not exist" containerID="f4a7fdbfd81526c18db00a68c0465b19bb2ffc7fb1d2ca10cac82ebd84a40e85" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.387713 5110 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f4a7fdbfd81526c18db00a68c0465b19bb2ffc7fb1d2ca10cac82ebd84a40e85"} err="failed to get container status \"f4a7fdbfd81526c18db00a68c0465b19bb2ffc7fb1d2ca10cac82ebd84a40e85\": rpc error: code = NotFound desc = could not find container \"f4a7fdbfd81526c18db00a68c0465b19bb2ffc7fb1d2ca10cac82ebd84a40e85\": container with ID starting with f4a7fdbfd81526c18db00a68c0465b19bb2ffc7fb1d2ca10cac82ebd84a40e85 not found: ID does not exist" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.388426 5110 scope.go:117] "RemoveContainer" containerID="7db8f11014a8cca9f97ca060a0b931722b85e6cdcbd2bef4f0dd9ba566249c0f" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.413095 5110 scope.go:117] "RemoveContainer" containerID="7db8f11014a8cca9f97ca060a0b931722b85e6cdcbd2bef4f0dd9ba566249c0f" Jan 26 00:14:51 crc kubenswrapper[5110]: E0126 00:14:51.413538 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7db8f11014a8cca9f97ca060a0b931722b85e6cdcbd2bef4f0dd9ba566249c0f\": container with ID starting with 7db8f11014a8cca9f97ca060a0b931722b85e6cdcbd2bef4f0dd9ba566249c0f not found: ID does not exist" containerID="7db8f11014a8cca9f97ca060a0b931722b85e6cdcbd2bef4f0dd9ba566249c0f" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.413572 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7db8f11014a8cca9f97ca060a0b931722b85e6cdcbd2bef4f0dd9ba566249c0f"} err="failed to get container status \"7db8f11014a8cca9f97ca060a0b931722b85e6cdcbd2bef4f0dd9ba566249c0f\": rpc error: code = NotFound desc = could not find container \"7db8f11014a8cca9f97ca060a0b931722b85e6cdcbd2bef4f0dd9ba566249c0f\": container with ID starting with 7db8f11014a8cca9f97ca060a0b931722b85e6cdcbd2bef4f0dd9ba566249c0f not found: ID does not exist" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.455016 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-684d994d47-8c2v7"] Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.463335 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-684d994d47-8c2v7"] Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.473961 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m"] Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.477950 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b78f8c8c8-lrx8m"] Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.548998 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e3a8647-d52f-44a4-8568-981eed756e87-tmp\") pod \"route-controller-manager-8fb7b8df5-7n2td\" (UID: \"0e3a8647-d52f-44a4-8568-981eed756e87\") " pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.549071 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3a8647-d52f-44a4-8568-981eed756e87-config\") pod \"route-controller-manager-8fb7b8df5-7n2td\" (UID: \"0e3a8647-d52f-44a4-8568-981eed756e87\") " pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.549143 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz6hs\" (UniqueName: \"kubernetes.io/projected/0e3a8647-d52f-44a4-8568-981eed756e87-kube-api-access-fz6hs\") pod \"route-controller-manager-8fb7b8df5-7n2td\" (UID: \"0e3a8647-d52f-44a4-8568-981eed756e87\") " 
pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.549224 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e3a8647-d52f-44a4-8568-981eed756e87-serving-cert\") pod \"route-controller-manager-8fb7b8df5-7n2td\" (UID: \"0e3a8647-d52f-44a4-8568-981eed756e87\") " pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.549272 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e3a8647-d52f-44a4-8568-981eed756e87-client-ca\") pod \"route-controller-manager-8fb7b8df5-7n2td\" (UID: \"0e3a8647-d52f-44a4-8568-981eed756e87\") " pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.564890 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8"] Jan 26 00:14:51 crc kubenswrapper[5110]: W0126 00:14:51.574771 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02a8eb21_0978_4b59_827e_1a6074ab093e.slice/crio-41582c41bd13389cae1616512a5866b237348a869af425cf17632a3241622346 WatchSource:0}: Error finding container 41582c41bd13389cae1616512a5866b237348a869af425cf17632a3241622346: Status 404 returned error can't find the container with id 41582c41bd13389cae1616512a5866b237348a869af425cf17632a3241622346 Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.650477 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e3a8647-d52f-44a4-8568-981eed756e87-serving-cert\") pod 
\"route-controller-manager-8fb7b8df5-7n2td\" (UID: \"0e3a8647-d52f-44a4-8568-981eed756e87\") " pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.650524 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e3a8647-d52f-44a4-8568-981eed756e87-client-ca\") pod \"route-controller-manager-8fb7b8df5-7n2td\" (UID: \"0e3a8647-d52f-44a4-8568-981eed756e87\") " pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.650561 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e3a8647-d52f-44a4-8568-981eed756e87-tmp\") pod \"route-controller-manager-8fb7b8df5-7n2td\" (UID: \"0e3a8647-d52f-44a4-8568-981eed756e87\") " pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.650783 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3a8647-d52f-44a4-8568-981eed756e87-config\") pod \"route-controller-manager-8fb7b8df5-7n2td\" (UID: \"0e3a8647-d52f-44a4-8568-981eed756e87\") " pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.650877 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fz6hs\" (UniqueName: \"kubernetes.io/projected/0e3a8647-d52f-44a4-8568-981eed756e87-kube-api-access-fz6hs\") pod \"route-controller-manager-8fb7b8df5-7n2td\" (UID: \"0e3a8647-d52f-44a4-8568-981eed756e87\") " pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.651189 5110 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e3a8647-d52f-44a4-8568-981eed756e87-tmp\") pod \"route-controller-manager-8fb7b8df5-7n2td\" (UID: \"0e3a8647-d52f-44a4-8568-981eed756e87\") " pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.652024 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3a8647-d52f-44a4-8568-981eed756e87-config\") pod \"route-controller-manager-8fb7b8df5-7n2td\" (UID: \"0e3a8647-d52f-44a4-8568-981eed756e87\") " pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.652192 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e3a8647-d52f-44a4-8568-981eed756e87-client-ca\") pod \"route-controller-manager-8fb7b8df5-7n2td\" (UID: \"0e3a8647-d52f-44a4-8568-981eed756e87\") " pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.657149 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e3a8647-d52f-44a4-8568-981eed756e87-serving-cert\") pod \"route-controller-manager-8fb7b8df5-7n2td\" (UID: \"0e3a8647-d52f-44a4-8568-981eed756e87\") " pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.673535 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz6hs\" (UniqueName: \"kubernetes.io/projected/0e3a8647-d52f-44a4-8568-981eed756e87-kube-api-access-fz6hs\") pod \"route-controller-manager-8fb7b8df5-7n2td\" (UID: \"0e3a8647-d52f-44a4-8568-981eed756e87\") " 
pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.687310 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" Jan 26 00:14:51 crc kubenswrapper[5110]: I0126 00:14:51.888641 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td"] Jan 26 00:14:52 crc kubenswrapper[5110]: I0126 00:14:52.004169 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" event={"ID":"0e3a8647-d52f-44a4-8568-981eed756e87","Type":"ContainerStarted","Data":"8c49a5fb0c9067507419f968327cc1901ca693cf65f76cf9a9a37f9e2d4f09c8"} Jan 26 00:14:52 crc kubenswrapper[5110]: I0126 00:14:52.007541 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" event={"ID":"02a8eb21-0978-4b59-827e-1a6074ab093e","Type":"ContainerStarted","Data":"f4591e53a7b7fcce981e1bee2df125be37d724e2ea39c2ce60076f1489737efa"} Jan 26 00:14:52 crc kubenswrapper[5110]: I0126 00:14:52.007623 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" event={"ID":"02a8eb21-0978-4b59-827e-1a6074ab093e","Type":"ContainerStarted","Data":"41582c41bd13389cae1616512a5866b237348a869af425cf17632a3241622346"} Jan 26 00:14:52 crc kubenswrapper[5110]: I0126 00:14:52.007892 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:14:52 crc kubenswrapper[5110]: I0126 00:14:52.031625 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" podStartSLOduration=3.031592817 
podStartE2EDuration="3.031592817s" podCreationTimestamp="2026-01-26 00:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:14:52.02499058 +0000 UTC m=+349.253889209" watchObservedRunningTime="2026-01-26 00:14:52.031592817 +0000 UTC m=+349.260491426"
Jan 26 00:14:52 crc kubenswrapper[5110]: I0126 00:14:52.622893 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8"
Jan 26 00:14:53 crc kubenswrapper[5110]: I0126 00:14:53.016491 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" event={"ID":"0e3a8647-d52f-44a4-8568-981eed756e87","Type":"ContainerStarted","Data":"be03156b934022a5644a26967a80644bf6fc90b1a3c8b200fd33e0aad5065e59"}
Jan 26 00:14:53 crc kubenswrapper[5110]: I0126 00:14:53.017120 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td"
Jan 26 00:14:53 crc kubenswrapper[5110]: I0126 00:14:53.022918 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td"
Jan 26 00:14:53 crc kubenswrapper[5110]: I0126 00:14:53.036492 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" podStartSLOduration=4.036467405 podStartE2EDuration="4.036467405s" podCreationTimestamp="2026-01-26 00:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:14:53.034814636 +0000 UTC m=+350.263713265" watchObservedRunningTime="2026-01-26 00:14:53.036467405 +0000 UTC m=+350.265366014"
Jan 26 00:14:53 crc kubenswrapper[5110]: I0126 00:14:53.326298 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bbb8424-18f0-4d1c-8f78-7a42e252cafb" path="/var/lib/kubelet/pods/5bbb8424-18f0-4d1c-8f78-7a42e252cafb/volumes"
Jan 26 00:14:53 crc kubenswrapper[5110]: I0126 00:14:53.327162 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d38403be-a3f6-43a0-9957-39bc82a8870c" path="/var/lib/kubelet/pods/d38403be-a3f6-43a0-9957-39bc82a8870c/volumes"
Jan 26 00:15:00 crc kubenswrapper[5110]: I0126 00:15:00.146298 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489775-b7hwd"]
Jan 26 00:15:00 crc kubenswrapper[5110]: I0126 00:15:00.194536 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489775-b7hwd"]
Jan 26 00:15:00 crc kubenswrapper[5110]: I0126 00:15:00.194787 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-b7hwd"
Jan 26 00:15:00 crc kubenswrapper[5110]: I0126 00:15:00.197679 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Jan 26 00:15:00 crc kubenswrapper[5110]: I0126 00:15:00.197766 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Jan 26 00:15:00 crc kubenswrapper[5110]: I0126 00:15:00.310838 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t55sm\" (UniqueName: \"kubernetes.io/projected/224a1e09-b7f3-4c2e-9bbb-febb8353f062-kube-api-access-t55sm\") pod \"collect-profiles-29489775-b7hwd\" (UID: \"224a1e09-b7f3-4c2e-9bbb-febb8353f062\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-b7hwd"
Jan 26 00:15:00 crc kubenswrapper[5110]: I0126 00:15:00.311177 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/224a1e09-b7f3-4c2e-9bbb-febb8353f062-secret-volume\") pod \"collect-profiles-29489775-b7hwd\" (UID: \"224a1e09-b7f3-4c2e-9bbb-febb8353f062\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-b7hwd"
Jan 26 00:15:00 crc kubenswrapper[5110]: I0126 00:15:00.311318 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/224a1e09-b7f3-4c2e-9bbb-febb8353f062-config-volume\") pod \"collect-profiles-29489775-b7hwd\" (UID: \"224a1e09-b7f3-4c2e-9bbb-febb8353f062\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-b7hwd"
Jan 26 00:15:00 crc kubenswrapper[5110]: I0126 00:15:00.413540 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t55sm\" (UniqueName: \"kubernetes.io/projected/224a1e09-b7f3-4c2e-9bbb-febb8353f062-kube-api-access-t55sm\") pod \"collect-profiles-29489775-b7hwd\" (UID: \"224a1e09-b7f3-4c2e-9bbb-febb8353f062\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-b7hwd"
Jan 26 00:15:00 crc kubenswrapper[5110]: I0126 00:15:00.413630 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/224a1e09-b7f3-4c2e-9bbb-febb8353f062-secret-volume\") pod \"collect-profiles-29489775-b7hwd\" (UID: \"224a1e09-b7f3-4c2e-9bbb-febb8353f062\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-b7hwd"
Jan 26 00:15:00 crc kubenswrapper[5110]: I0126 00:15:00.413685 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/224a1e09-b7f3-4c2e-9bbb-febb8353f062-config-volume\") pod \"collect-profiles-29489775-b7hwd\" (UID: \"224a1e09-b7f3-4c2e-9bbb-febb8353f062\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-b7hwd"
Jan 26 00:15:00 crc kubenswrapper[5110]: I0126 00:15:00.414922 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/224a1e09-b7f3-4c2e-9bbb-febb8353f062-config-volume\") pod \"collect-profiles-29489775-b7hwd\" (UID: \"224a1e09-b7f3-4c2e-9bbb-febb8353f062\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-b7hwd"
Jan 26 00:15:00 crc kubenswrapper[5110]: I0126 00:15:00.420675 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/224a1e09-b7f3-4c2e-9bbb-febb8353f062-secret-volume\") pod \"collect-profiles-29489775-b7hwd\" (UID: \"224a1e09-b7f3-4c2e-9bbb-febb8353f062\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-b7hwd"
Jan 26 00:15:00 crc kubenswrapper[5110]: I0126 00:15:00.431706 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t55sm\" (UniqueName: \"kubernetes.io/projected/224a1e09-b7f3-4c2e-9bbb-febb8353f062-kube-api-access-t55sm\") pod \"collect-profiles-29489775-b7hwd\" (UID: \"224a1e09-b7f3-4c2e-9bbb-febb8353f062\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-b7hwd"
Jan 26 00:15:00 crc kubenswrapper[5110]: I0126 00:15:00.514813 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-b7hwd"
Jan 26 00:15:01 crc kubenswrapper[5110]: I0126 00:15:01.412062 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489775-b7hwd"]
Jan 26 00:15:02 crc kubenswrapper[5110]: I0126 00:15:02.104069 5110 generic.go:358] "Generic (PLEG): container finished" podID="224a1e09-b7f3-4c2e-9bbb-febb8353f062" containerID="64928ca80b1dc2a47fa30b61200efd0c87378dd83f70e4112db1d827658e9bd3" exitCode=0
Jan 26 00:15:02 crc kubenswrapper[5110]: I0126 00:15:02.104360 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-b7hwd" event={"ID":"224a1e09-b7f3-4c2e-9bbb-febb8353f062","Type":"ContainerDied","Data":"64928ca80b1dc2a47fa30b61200efd0c87378dd83f70e4112db1d827658e9bd3"}
Jan 26 00:15:02 crc kubenswrapper[5110]: I0126 00:15:02.104399 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-b7hwd" event={"ID":"224a1e09-b7f3-4c2e-9bbb-febb8353f062","Type":"ContainerStarted","Data":"2a157f9ce8ab922cf4c3e9569d497862a32cc1ae003e612035501d81d542cae4"}
Jan 26 00:15:03 crc kubenswrapper[5110]: I0126 00:15:03.431286 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-b7hwd"
Jan 26 00:15:03 crc kubenswrapper[5110]: I0126 00:15:03.460591 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t55sm\" (UniqueName: \"kubernetes.io/projected/224a1e09-b7f3-4c2e-9bbb-febb8353f062-kube-api-access-t55sm\") pod \"224a1e09-b7f3-4c2e-9bbb-febb8353f062\" (UID: \"224a1e09-b7f3-4c2e-9bbb-febb8353f062\") "
Jan 26 00:15:03 crc kubenswrapper[5110]: I0126 00:15:03.460660 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/224a1e09-b7f3-4c2e-9bbb-febb8353f062-secret-volume\") pod \"224a1e09-b7f3-4c2e-9bbb-febb8353f062\" (UID: \"224a1e09-b7f3-4c2e-9bbb-febb8353f062\") "
Jan 26 00:15:03 crc kubenswrapper[5110]: I0126 00:15:03.460703 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/224a1e09-b7f3-4c2e-9bbb-febb8353f062-config-volume\") pod \"224a1e09-b7f3-4c2e-9bbb-febb8353f062\" (UID: \"224a1e09-b7f3-4c2e-9bbb-febb8353f062\") "
Jan 26 00:15:03 crc kubenswrapper[5110]: I0126 00:15:03.461476 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/224a1e09-b7f3-4c2e-9bbb-febb8353f062-config-volume" (OuterVolumeSpecName: "config-volume") pod "224a1e09-b7f3-4c2e-9bbb-febb8353f062" (UID: "224a1e09-b7f3-4c2e-9bbb-febb8353f062"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:15:03 crc kubenswrapper[5110]: I0126 00:15:03.468831 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/224a1e09-b7f3-4c2e-9bbb-febb8353f062-kube-api-access-t55sm" (OuterVolumeSpecName: "kube-api-access-t55sm") pod "224a1e09-b7f3-4c2e-9bbb-febb8353f062" (UID: "224a1e09-b7f3-4c2e-9bbb-febb8353f062"). InnerVolumeSpecName "kube-api-access-t55sm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:15:03 crc kubenswrapper[5110]: I0126 00:15:03.469983 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/224a1e09-b7f3-4c2e-9bbb-febb8353f062-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "224a1e09-b7f3-4c2e-9bbb-febb8353f062" (UID: "224a1e09-b7f3-4c2e-9bbb-febb8353f062"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:15:03 crc kubenswrapper[5110]: I0126 00:15:03.562455 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t55sm\" (UniqueName: \"kubernetes.io/projected/224a1e09-b7f3-4c2e-9bbb-febb8353f062-kube-api-access-t55sm\") on node \"crc\" DevicePath \"\""
Jan 26 00:15:03 crc kubenswrapper[5110]: I0126 00:15:03.562495 5110 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/224a1e09-b7f3-4c2e-9bbb-febb8353f062-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 00:15:03 crc kubenswrapper[5110]: I0126 00:15:03.562506 5110 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/224a1e09-b7f3-4c2e-9bbb-febb8353f062-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 00:15:04 crc kubenswrapper[5110]: I0126 00:15:04.120357 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-b7hwd" event={"ID":"224a1e09-b7f3-4c2e-9bbb-febb8353f062","Type":"ContainerDied","Data":"2a157f9ce8ab922cf4c3e9569d497862a32cc1ae003e612035501d81d542cae4"}
Jan 26 00:15:04 crc kubenswrapper[5110]: I0126 00:15:04.120751 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a157f9ce8ab922cf4c3e9569d497862a32cc1ae003e612035501d81d542cae4"
Jan 26 00:15:04 crc kubenswrapper[5110]: I0126 00:15:04.120917 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-b7hwd"
Jan 26 00:15:06 crc kubenswrapper[5110]: I0126 00:15:06.189193 5110 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 00:15:32 crc kubenswrapper[5110]: I0126 00:15:32.440037 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td"]
Jan 26 00:15:32 crc kubenswrapper[5110]: I0126 00:15:32.441753 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" podUID="0e3a8647-d52f-44a4-8568-981eed756e87" containerName="route-controller-manager" containerID="cri-o://be03156b934022a5644a26967a80644bf6fc90b1a3c8b200fd33e0aad5065e59" gracePeriod=30
Jan 26 00:15:32 crc kubenswrapper[5110]: I0126 00:15:32.939937 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td"
Jan 26 00:15:32 crc kubenswrapper[5110]: I0126 00:15:32.981964 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"]
Jan 26 00:15:32 crc kubenswrapper[5110]: I0126 00:15:32.983618 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0e3a8647-d52f-44a4-8568-981eed756e87" containerName="route-controller-manager"
Jan 26 00:15:32 crc kubenswrapper[5110]: I0126 00:15:32.983643 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3a8647-d52f-44a4-8568-981eed756e87" containerName="route-controller-manager"
Jan 26 00:15:32 crc kubenswrapper[5110]: I0126 00:15:32.983667 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="224a1e09-b7f3-4c2e-9bbb-febb8353f062" containerName="collect-profiles"
Jan 26 00:15:32 crc kubenswrapper[5110]: I0126 00:15:32.983675 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="224a1e09-b7f3-4c2e-9bbb-febb8353f062" containerName="collect-profiles"
Jan 26 00:15:32 crc kubenswrapper[5110]: I0126 00:15:32.983789 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="0e3a8647-d52f-44a4-8568-981eed756e87" containerName="route-controller-manager"
Jan 26 00:15:32 crc kubenswrapper[5110]: I0126 00:15:32.983806 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="224a1e09-b7f3-4c2e-9bbb-febb8353f062" containerName="collect-profiles"
Jan 26 00:15:32 crc kubenswrapper[5110]: I0126 00:15:32.988490 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"
Jan 26 00:15:32 crc kubenswrapper[5110]: I0126 00:15:32.992126 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3a8647-d52f-44a4-8568-981eed756e87-config\") pod \"0e3a8647-d52f-44a4-8568-981eed756e87\" (UID: \"0e3a8647-d52f-44a4-8568-981eed756e87\") "
Jan 26 00:15:32 crc kubenswrapper[5110]: I0126 00:15:32.992318 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e3a8647-d52f-44a4-8568-981eed756e87-client-ca\") pod \"0e3a8647-d52f-44a4-8568-981eed756e87\" (UID: \"0e3a8647-d52f-44a4-8568-981eed756e87\") "
Jan 26 00:15:32 crc kubenswrapper[5110]: I0126 00:15:32.992361 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz6hs\" (UniqueName: \"kubernetes.io/projected/0e3a8647-d52f-44a4-8568-981eed756e87-kube-api-access-fz6hs\") pod \"0e3a8647-d52f-44a4-8568-981eed756e87\" (UID: \"0e3a8647-d52f-44a4-8568-981eed756e87\") "
Jan 26 00:15:32 crc kubenswrapper[5110]: I0126 00:15:32.992400 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e3a8647-d52f-44a4-8568-981eed756e87-serving-cert\") pod \"0e3a8647-d52f-44a4-8568-981eed756e87\" (UID: \"0e3a8647-d52f-44a4-8568-981eed756e87\") "
Jan 26 00:15:32 crc kubenswrapper[5110]: I0126 00:15:32.992522 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e3a8647-d52f-44a4-8568-981eed756e87-tmp\") pod \"0e3a8647-d52f-44a4-8568-981eed756e87\" (UID: \"0e3a8647-d52f-44a4-8568-981eed756e87\") "
Jan 26 00:15:32 crc kubenswrapper[5110]: I0126 00:15:32.993014 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e3a8647-d52f-44a4-8568-981eed756e87-tmp" (OuterVolumeSpecName: "tmp") pod "0e3a8647-d52f-44a4-8568-981eed756e87" (UID: "0e3a8647-d52f-44a4-8568-981eed756e87"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:15:32 crc kubenswrapper[5110]: I0126 00:15:32.993197 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e3a8647-d52f-44a4-8568-981eed756e87-config" (OuterVolumeSpecName: "config") pod "0e3a8647-d52f-44a4-8568-981eed756e87" (UID: "0e3a8647-d52f-44a4-8568-981eed756e87"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:15:32 crc kubenswrapper[5110]: I0126 00:15:32.993568 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e3a8647-d52f-44a4-8568-981eed756e87-client-ca" (OuterVolumeSpecName: "client-ca") pod "0e3a8647-d52f-44a4-8568-981eed756e87" (UID: "0e3a8647-d52f-44a4-8568-981eed756e87"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.000563 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"]
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.000690 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e3a8647-d52f-44a4-8568-981eed756e87-kube-api-access-fz6hs" (OuterVolumeSpecName: "kube-api-access-fz6hs") pod "0e3a8647-d52f-44a4-8568-981eed756e87" (UID: "0e3a8647-d52f-44a4-8568-981eed756e87"). InnerVolumeSpecName "kube-api-access-fz6hs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.000966 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e3a8647-d52f-44a4-8568-981eed756e87-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0e3a8647-d52f-44a4-8568-981eed756e87" (UID: "0e3a8647-d52f-44a4-8568-981eed756e87"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.093484 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6485aaf3-240b-4e51-92bf-a74f2ea8b5f3-client-ca\") pod \"route-controller-manager-76476445d8-tvsht\" (UID: \"6485aaf3-240b-4e51-92bf-a74f2ea8b5f3\") " pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.093556 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl9bq\" (UniqueName: \"kubernetes.io/projected/6485aaf3-240b-4e51-92bf-a74f2ea8b5f3-kube-api-access-hl9bq\") pod \"route-controller-manager-76476445d8-tvsht\" (UID: \"6485aaf3-240b-4e51-92bf-a74f2ea8b5f3\") " pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.093685 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6485aaf3-240b-4e51-92bf-a74f2ea8b5f3-serving-cert\") pod \"route-controller-manager-76476445d8-tvsht\" (UID: \"6485aaf3-240b-4e51-92bf-a74f2ea8b5f3\") " pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.093954 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6485aaf3-240b-4e51-92bf-a74f2ea8b5f3-config\") pod \"route-controller-manager-76476445d8-tvsht\" (UID: \"6485aaf3-240b-4e51-92bf-a74f2ea8b5f3\") " pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.093988 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6485aaf3-240b-4e51-92bf-a74f2ea8b5f3-tmp\") pod \"route-controller-manager-76476445d8-tvsht\" (UID: \"6485aaf3-240b-4e51-92bf-a74f2ea8b5f3\") " pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.094181 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e3a8647-d52f-44a4-8568-981eed756e87-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.094201 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fz6hs\" (UniqueName: \"kubernetes.io/projected/0e3a8647-d52f-44a4-8568-981eed756e87-kube-api-access-fz6hs\") on node \"crc\" DevicePath \"\""
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.094217 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e3a8647-d52f-44a4-8568-981eed756e87-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.094231 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e3a8647-d52f-44a4-8568-981eed756e87-tmp\") on node \"crc\" DevicePath \"\""
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.094242 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3a8647-d52f-44a4-8568-981eed756e87-config\") on node \"crc\" DevicePath \"\""
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.195363 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hl9bq\" (UniqueName: \"kubernetes.io/projected/6485aaf3-240b-4e51-92bf-a74f2ea8b5f3-kube-api-access-hl9bq\") pod \"route-controller-manager-76476445d8-tvsht\" (UID: \"6485aaf3-240b-4e51-92bf-a74f2ea8b5f3\") " pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.195838 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6485aaf3-240b-4e51-92bf-a74f2ea8b5f3-serving-cert\") pod \"route-controller-manager-76476445d8-tvsht\" (UID: \"6485aaf3-240b-4e51-92bf-a74f2ea8b5f3\") " pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.196549 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6485aaf3-240b-4e51-92bf-a74f2ea8b5f3-config\") pod \"route-controller-manager-76476445d8-tvsht\" (UID: \"6485aaf3-240b-4e51-92bf-a74f2ea8b5f3\") " pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.196584 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6485aaf3-240b-4e51-92bf-a74f2ea8b5f3-tmp\") pod \"route-controller-manager-76476445d8-tvsht\" (UID: \"6485aaf3-240b-4e51-92bf-a74f2ea8b5f3\") " pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.196691 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6485aaf3-240b-4e51-92bf-a74f2ea8b5f3-client-ca\") pod \"route-controller-manager-76476445d8-tvsht\" (UID: \"6485aaf3-240b-4e51-92bf-a74f2ea8b5f3\") " pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.197566 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6485aaf3-240b-4e51-92bf-a74f2ea8b5f3-tmp\") pod \"route-controller-manager-76476445d8-tvsht\" (UID: \"6485aaf3-240b-4e51-92bf-a74f2ea8b5f3\") " pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.198025 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6485aaf3-240b-4e51-92bf-a74f2ea8b5f3-config\") pod \"route-controller-manager-76476445d8-tvsht\" (UID: \"6485aaf3-240b-4e51-92bf-a74f2ea8b5f3\") " pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.198485 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6485aaf3-240b-4e51-92bf-a74f2ea8b5f3-client-ca\") pod \"route-controller-manager-76476445d8-tvsht\" (UID: \"6485aaf3-240b-4e51-92bf-a74f2ea8b5f3\") " pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.200719 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6485aaf3-240b-4e51-92bf-a74f2ea8b5f3-serving-cert\") pod \"route-controller-manager-76476445d8-tvsht\" (UID: \"6485aaf3-240b-4e51-92bf-a74f2ea8b5f3\") " pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.213074 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl9bq\" (UniqueName: \"kubernetes.io/projected/6485aaf3-240b-4e51-92bf-a74f2ea8b5f3-kube-api-access-hl9bq\") pod \"route-controller-manager-76476445d8-tvsht\" (UID: \"6485aaf3-240b-4e51-92bf-a74f2ea8b5f3\") " pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.326145 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.333688 5110 generic.go:358] "Generic (PLEG): container finished" podID="0e3a8647-d52f-44a4-8568-981eed756e87" containerID="be03156b934022a5644a26967a80644bf6fc90b1a3c8b200fd33e0aad5065e59" exitCode=0
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.333873 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" event={"ID":"0e3a8647-d52f-44a4-8568-981eed756e87","Type":"ContainerDied","Data":"be03156b934022a5644a26967a80644bf6fc90b1a3c8b200fd33e0aad5065e59"}
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.333941 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td" event={"ID":"0e3a8647-d52f-44a4-8568-981eed756e87","Type":"ContainerDied","Data":"8c49a5fb0c9067507419f968327cc1901ca693cf65f76cf9a9a37f9e2d4f09c8"}
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.333980 5110 scope.go:117] "RemoveContainer" containerID="be03156b934022a5644a26967a80644bf6fc90b1a3c8b200fd33e0aad5065e59"
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.334180 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td"
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.361698 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td"]
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.364579 5110 scope.go:117] "RemoveContainer" containerID="be03156b934022a5644a26967a80644bf6fc90b1a3c8b200fd33e0aad5065e59"
Jan 26 00:15:33 crc kubenswrapper[5110]: E0126 00:15:33.365246 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be03156b934022a5644a26967a80644bf6fc90b1a3c8b200fd33e0aad5065e59\": container with ID starting with be03156b934022a5644a26967a80644bf6fc90b1a3c8b200fd33e0aad5065e59 not found: ID does not exist" containerID="be03156b934022a5644a26967a80644bf6fc90b1a3c8b200fd33e0aad5065e59"
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.365305 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be03156b934022a5644a26967a80644bf6fc90b1a3c8b200fd33e0aad5065e59"} err="failed to get container status \"be03156b934022a5644a26967a80644bf6fc90b1a3c8b200fd33e0aad5065e59\": rpc error: code = NotFound desc = could not find container \"be03156b934022a5644a26967a80644bf6fc90b1a3c8b200fd33e0aad5065e59\": container with ID starting with be03156b934022a5644a26967a80644bf6fc90b1a3c8b200fd33e0aad5065e59 not found: ID does not exist"
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.367620 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8fb7b8df5-7n2td"]
Jan 26 00:15:33 crc kubenswrapper[5110]: I0126 00:15:33.736915 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"]
Jan 26 00:15:34 crc kubenswrapper[5110]: I0126 00:15:34.352135 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht" event={"ID":"6485aaf3-240b-4e51-92bf-a74f2ea8b5f3","Type":"ContainerStarted","Data":"c99aa8461e0bd4f0fce9908ebac22b8b9236b41224f1be87cd3ba064f5902a8d"}
Jan 26 00:15:34 crc kubenswrapper[5110]: I0126 00:15:34.352698 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"
Jan 26 00:15:34 crc kubenswrapper[5110]: I0126 00:15:34.352718 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht" event={"ID":"6485aaf3-240b-4e51-92bf-a74f2ea8b5f3","Type":"ContainerStarted","Data":"37b0e116d4367258ee61c0a887706dedaa54c5da69b98de4dc2e7d4aff5d5e16"}
Jan 26 00:15:34 crc kubenswrapper[5110]: I0126 00:15:34.373369 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht" podStartSLOduration=2.373341438 podStartE2EDuration="2.373341438s" podCreationTimestamp="2026-01-26 00:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:15:34.373218115 +0000 UTC m=+391.602116724" watchObservedRunningTime="2026-01-26 00:15:34.373341438 +0000 UTC m=+391.602240037"
Jan 26 00:15:34 crc kubenswrapper[5110]: I0126 00:15:34.656060 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76476445d8-tvsht"
Jan 26 00:15:35 crc kubenswrapper[5110]: I0126 00:15:35.328930 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e3a8647-d52f-44a4-8568-981eed756e87" path="/var/lib/kubelet/pods/0e3a8647-d52f-44a4-8568-981eed756e87/volumes"
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.454145 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8"]
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.455175 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" podUID="02a8eb21-0978-4b59-827e-1a6074ab093e" containerName="controller-manager" containerID="cri-o://f4591e53a7b7fcce981e1bee2df125be37d724e2ea39c2ce60076f1489737efa" gracePeriod=30
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.841115 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8"
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.881402 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-788b45d9bd-8rfvz"]
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.882043 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="02a8eb21-0978-4b59-827e-1a6074ab093e" containerName="controller-manager"
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.882063 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a8eb21-0978-4b59-827e-1a6074ab093e" containerName="controller-manager"
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.882206 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="02a8eb21-0978-4b59-827e-1a6074ab093e" containerName="controller-manager"
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.887822 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz"
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.892368 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-788b45d9bd-8rfvz"]
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.916386 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02a8eb21-0978-4b59-827e-1a6074ab093e-proxy-ca-bundles\") pod \"02a8eb21-0978-4b59-827e-1a6074ab093e\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") "
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.916461 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7wmd\" (UniqueName: \"kubernetes.io/projected/02a8eb21-0978-4b59-827e-1a6074ab093e-kube-api-access-s7wmd\") pod \"02a8eb21-0978-4b59-827e-1a6074ab093e\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") "
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.916553 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02a8eb21-0978-4b59-827e-1a6074ab093e-config\") pod \"02a8eb21-0978-4b59-827e-1a6074ab093e\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") "
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.916596 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02a8eb21-0978-4b59-827e-1a6074ab093e-serving-cert\") pod \"02a8eb21-0978-4b59-827e-1a6074ab093e\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") "
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.916661 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/02a8eb21-0978-4b59-827e-1a6074ab093e-tmp\") pod \"02a8eb21-0978-4b59-827e-1a6074ab093e\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") "
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.917115 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02a8eb21-0978-4b59-827e-1a6074ab093e-tmp" (OuterVolumeSpecName: "tmp") pod "02a8eb21-0978-4b59-827e-1a6074ab093e" (UID: "02a8eb21-0978-4b59-827e-1a6074ab093e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.917263 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a8eb21-0978-4b59-827e-1a6074ab093e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "02a8eb21-0978-4b59-827e-1a6074ab093e" (UID: "02a8eb21-0978-4b59-827e-1a6074ab093e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.917339 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a8eb21-0978-4b59-827e-1a6074ab093e-config" (OuterVolumeSpecName: "config") pod "02a8eb21-0978-4b59-827e-1a6074ab093e" (UID: "02a8eb21-0978-4b59-827e-1a6074ab093e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.917779 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02a8eb21-0978-4b59-827e-1a6074ab093e-client-ca\") pod \"02a8eb21-0978-4b59-827e-1a6074ab093e\" (UID: \"02a8eb21-0978-4b59-827e-1a6074ab093e\") "
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.918083 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8934e75-c097-47a6-a1b1-25dc22256124-serving-cert\") pod \"controller-manager-788b45d9bd-8rfvz\" (UID: \"d8934e75-c097-47a6-a1b1-25dc22256124\") " pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz"
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.918136 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8934e75-c097-47a6-a1b1-25dc22256124-config\") pod \"controller-manager-788b45d9bd-8rfvz\" (UID: \"d8934e75-c097-47a6-a1b1-25dc22256124\") " pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz"
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.918290 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8934e75-c097-47a6-a1b1-25dc22256124-proxy-ca-bundles\") pod \"controller-manager-788b45d9bd-8rfvz\" (UID: \"d8934e75-c097-47a6-a1b1-25dc22256124\") " pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz"
Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.918394 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a8eb21-0978-4b59-827e-1a6074ab093e-client-ca" (OuterVolumeSpecName: "client-ca") pod "02a8eb21-0978-4b59-827e-1a6074ab093e"
(UID: "02a8eb21-0978-4b59-827e-1a6074ab093e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.918482 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf47h\" (UniqueName: \"kubernetes.io/projected/d8934e75-c097-47a6-a1b1-25dc22256124-kube-api-access-mf47h\") pod \"controller-manager-788b45d9bd-8rfvz\" (UID: \"d8934e75-c097-47a6-a1b1-25dc22256124\") " pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz" Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.918515 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8934e75-c097-47a6-a1b1-25dc22256124-client-ca\") pod \"controller-manager-788b45d9bd-8rfvz\" (UID: \"d8934e75-c097-47a6-a1b1-25dc22256124\") " pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz" Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.918540 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d8934e75-c097-47a6-a1b1-25dc22256124-tmp\") pod \"controller-manager-788b45d9bd-8rfvz\" (UID: \"d8934e75-c097-47a6-a1b1-25dc22256124\") " pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz" Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.918607 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02a8eb21-0978-4b59-827e-1a6074ab093e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.918623 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02a8eb21-0978-4b59-827e-1a6074ab093e-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:52 crc 
kubenswrapper[5110]: I0126 00:15:52.918636 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/02a8eb21-0978-4b59-827e-1a6074ab093e-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.918646 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02a8eb21-0978-4b59-827e-1a6074ab093e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.923206 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02a8eb21-0978-4b59-827e-1a6074ab093e-kube-api-access-s7wmd" (OuterVolumeSpecName: "kube-api-access-s7wmd") pod "02a8eb21-0978-4b59-827e-1a6074ab093e" (UID: "02a8eb21-0978-4b59-827e-1a6074ab093e"). InnerVolumeSpecName "kube-api-access-s7wmd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:15:52 crc kubenswrapper[5110]: I0126 00:15:52.923723 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02a8eb21-0978-4b59-827e-1a6074ab093e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "02a8eb21-0978-4b59-827e-1a6074ab093e" (UID: "02a8eb21-0978-4b59-827e-1a6074ab093e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.020286 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8934e75-c097-47a6-a1b1-25dc22256124-client-ca\") pod \"controller-manager-788b45d9bd-8rfvz\" (UID: \"d8934e75-c097-47a6-a1b1-25dc22256124\") " pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz" Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.020349 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d8934e75-c097-47a6-a1b1-25dc22256124-tmp\") pod \"controller-manager-788b45d9bd-8rfvz\" (UID: \"d8934e75-c097-47a6-a1b1-25dc22256124\") " pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz" Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.020384 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8934e75-c097-47a6-a1b1-25dc22256124-serving-cert\") pod \"controller-manager-788b45d9bd-8rfvz\" (UID: \"d8934e75-c097-47a6-a1b1-25dc22256124\") " pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz" Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.020408 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8934e75-c097-47a6-a1b1-25dc22256124-config\") pod \"controller-manager-788b45d9bd-8rfvz\" (UID: \"d8934e75-c097-47a6-a1b1-25dc22256124\") " pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz" Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.020446 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8934e75-c097-47a6-a1b1-25dc22256124-proxy-ca-bundles\") pod \"controller-manager-788b45d9bd-8rfvz\" (UID: 
\"d8934e75-c097-47a6-a1b1-25dc22256124\") " pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz" Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.020488 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mf47h\" (UniqueName: \"kubernetes.io/projected/d8934e75-c097-47a6-a1b1-25dc22256124-kube-api-access-mf47h\") pod \"controller-manager-788b45d9bd-8rfvz\" (UID: \"d8934e75-c097-47a6-a1b1-25dc22256124\") " pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz" Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.020532 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s7wmd\" (UniqueName: \"kubernetes.io/projected/02a8eb21-0978-4b59-827e-1a6074ab093e-kube-api-access-s7wmd\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.020544 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02a8eb21-0978-4b59-827e-1a6074ab093e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.021659 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d8934e75-c097-47a6-a1b1-25dc22256124-tmp\") pod \"controller-manager-788b45d9bd-8rfvz\" (UID: \"d8934e75-c097-47a6-a1b1-25dc22256124\") " pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz" Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.022270 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8934e75-c097-47a6-a1b1-25dc22256124-client-ca\") pod \"controller-manager-788b45d9bd-8rfvz\" (UID: \"d8934e75-c097-47a6-a1b1-25dc22256124\") " pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz" Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.022292 5110 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8934e75-c097-47a6-a1b1-25dc22256124-proxy-ca-bundles\") pod \"controller-manager-788b45d9bd-8rfvz\" (UID: \"d8934e75-c097-47a6-a1b1-25dc22256124\") " pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz" Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.022660 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8934e75-c097-47a6-a1b1-25dc22256124-config\") pod \"controller-manager-788b45d9bd-8rfvz\" (UID: \"d8934e75-c097-47a6-a1b1-25dc22256124\") " pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz" Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.026856 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8934e75-c097-47a6-a1b1-25dc22256124-serving-cert\") pod \"controller-manager-788b45d9bd-8rfvz\" (UID: \"d8934e75-c097-47a6-a1b1-25dc22256124\") " pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz" Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.037374 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf47h\" (UniqueName: \"kubernetes.io/projected/d8934e75-c097-47a6-a1b1-25dc22256124-kube-api-access-mf47h\") pod \"controller-manager-788b45d9bd-8rfvz\" (UID: \"d8934e75-c097-47a6-a1b1-25dc22256124\") " pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz" Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.253157 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz" Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.488461 5110 generic.go:358] "Generic (PLEG): container finished" podID="02a8eb21-0978-4b59-827e-1a6074ab093e" containerID="f4591e53a7b7fcce981e1bee2df125be37d724e2ea39c2ce60076f1489737efa" exitCode=0 Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.488661 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.488699 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" event={"ID":"02a8eb21-0978-4b59-827e-1a6074ab093e","Type":"ContainerDied","Data":"f4591e53a7b7fcce981e1bee2df125be37d724e2ea39c2ce60076f1489737efa"} Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.489093 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8" event={"ID":"02a8eb21-0978-4b59-827e-1a6074ab093e","Type":"ContainerDied","Data":"41582c41bd13389cae1616512a5866b237348a869af425cf17632a3241622346"} Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.489120 5110 scope.go:117] "RemoveContainer" containerID="f4591e53a7b7fcce981e1bee2df125be37d724e2ea39c2ce60076f1489737efa" Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.502651 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-788b45d9bd-8rfvz"] Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.510437 5110 scope.go:117] "RemoveContainer" containerID="f4591e53a7b7fcce981e1bee2df125be37d724e2ea39c2ce60076f1489737efa" Jan 26 00:15:53 crc kubenswrapper[5110]: E0126 00:15:53.512238 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f4591e53a7b7fcce981e1bee2df125be37d724e2ea39c2ce60076f1489737efa\": container with ID starting with f4591e53a7b7fcce981e1bee2df125be37d724e2ea39c2ce60076f1489737efa not found: ID does not exist" containerID="f4591e53a7b7fcce981e1bee2df125be37d724e2ea39c2ce60076f1489737efa" Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.512290 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4591e53a7b7fcce981e1bee2df125be37d724e2ea39c2ce60076f1489737efa"} err="failed to get container status \"f4591e53a7b7fcce981e1bee2df125be37d724e2ea39c2ce60076f1489737efa\": rpc error: code = NotFound desc = could not find container \"f4591e53a7b7fcce981e1bee2df125be37d724e2ea39c2ce60076f1489737efa\": container with ID starting with f4591e53a7b7fcce981e1bee2df125be37d724e2ea39c2ce60076f1489737efa not found: ID does not exist" Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.531361 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8"] Jan 26 00:15:53 crc kubenswrapper[5110]: I0126 00:15:53.543719 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5dbff5d56f-8fzr8"] Jan 26 00:15:54 crc kubenswrapper[5110]: I0126 00:15:54.500126 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz" event={"ID":"d8934e75-c097-47a6-a1b1-25dc22256124","Type":"ContainerStarted","Data":"8b619bbde1aa5b0fb702c9c4ba4feade9fe396a1b2d35af09b41174207c1b0ac"} Jan 26 00:15:54 crc kubenswrapper[5110]: I0126 00:15:54.500600 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz" event={"ID":"d8934e75-c097-47a6-a1b1-25dc22256124","Type":"ContainerStarted","Data":"24028fa8b68d30949343eb1456d2e794bbba6873736af539927ee8534df6ec7d"} Jan 26 00:15:54 crc kubenswrapper[5110]: I0126 00:15:54.500633 5110 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz" Jan 26 00:15:54 crc kubenswrapper[5110]: I0126 00:15:54.576513 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz" Jan 26 00:15:54 crc kubenswrapper[5110]: I0126 00:15:54.607442 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-788b45d9bd-8rfvz" podStartSLOduration=2.60740426 podStartE2EDuration="2.60740426s" podCreationTimestamp="2026-01-26 00:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:15:54.527199532 +0000 UTC m=+411.756098231" watchObservedRunningTime="2026-01-26 00:15:54.60740426 +0000 UTC m=+411.836302879" Jan 26 00:15:55 crc kubenswrapper[5110]: I0126 00:15:55.331424 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02a8eb21-0978-4b59-827e-1a6074ab093e" path="/var/lib/kubelet/pods/02a8eb21-0978-4b59-827e-1a6074ab093e/volumes" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.072373 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2dlcz"] Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.076048 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2dlcz" podUID="4276f77a-a21c-47f2-9902-e37c3ab865f5" containerName="registry-server" containerID="cri-o://68c63d9601cce8965e267e11a19e683789e44bdb30504077e32ea3f516b8f166" gracePeriod=30 Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.080520 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c2w74"] Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.081076 5110 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c2w74" podUID="20cad200-6605-4fac-b28d-b84cd2d74a89" containerName="registry-server" containerID="cri-o://6f53b90ec9de29e7f8617fcaff0bb12a1efcfb7bf0d08d73c5d8f9e78d45a4b7" gracePeriod=30 Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.106785 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-bjwbh"] Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.107788 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" podUID="4ba4d5a4-755b-4d6e-b250-cd705b244775" containerName="marketplace-operator" containerID="cri-o://64fc4338a39da7ed8a6a82def166294c46ef7187f1462f5c6f71b68a028d6dc9" gracePeriod=30 Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.120680 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g6gk"] Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.121079 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5g6gk" podUID="063f39d3-4de8-4c26-99fc-3148a8738541" containerName="registry-server" containerID="cri-o://612200d4a7f3c5db909d8b17a3551fc7a4bec36c082ea2e609088c07cca8197f" gracePeriod=30 Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.135342 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-25lrq"] Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.150551 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-25lrq" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.150388 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d6ths"] Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.151768 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-25lrq"] Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.152090 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d6ths" podUID="168e3c2a-eb5e-451a-bba9-93e41fb1e958" containerName="registry-server" containerID="cri-o://b5cc787dd637231a5a60a4bbefe8f7a5e7c978f0b3196dbd616e501d6092c251" gracePeriod=30 Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.269019 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a1ada370-69b1-4b43-9a4b-95006bc2f1c7-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-25lrq\" (UID: \"a1ada370-69b1-4b43-9a4b-95006bc2f1c7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-25lrq" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.269577 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a1ada370-69b1-4b43-9a4b-95006bc2f1c7-tmp\") pod \"marketplace-operator-547dbd544d-25lrq\" (UID: \"a1ada370-69b1-4b43-9a4b-95006bc2f1c7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-25lrq" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.269754 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlb25\" (UniqueName: \"kubernetes.io/projected/a1ada370-69b1-4b43-9a4b-95006bc2f1c7-kube-api-access-zlb25\") pod 
\"marketplace-operator-547dbd544d-25lrq\" (UID: \"a1ada370-69b1-4b43-9a4b-95006bc2f1c7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-25lrq" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.269806 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1ada370-69b1-4b43-9a4b-95006bc2f1c7-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-25lrq\" (UID: \"a1ada370-69b1-4b43-9a4b-95006bc2f1c7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-25lrq" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.371890 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zlb25\" (UniqueName: \"kubernetes.io/projected/a1ada370-69b1-4b43-9a4b-95006bc2f1c7-kube-api-access-zlb25\") pod \"marketplace-operator-547dbd544d-25lrq\" (UID: \"a1ada370-69b1-4b43-9a4b-95006bc2f1c7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-25lrq" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.371960 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1ada370-69b1-4b43-9a4b-95006bc2f1c7-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-25lrq\" (UID: \"a1ada370-69b1-4b43-9a4b-95006bc2f1c7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-25lrq" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.372056 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a1ada370-69b1-4b43-9a4b-95006bc2f1c7-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-25lrq\" (UID: \"a1ada370-69b1-4b43-9a4b-95006bc2f1c7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-25lrq" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.372104 5110 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a1ada370-69b1-4b43-9a4b-95006bc2f1c7-tmp\") pod \"marketplace-operator-547dbd544d-25lrq\" (UID: \"a1ada370-69b1-4b43-9a4b-95006bc2f1c7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-25lrq" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.373144 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a1ada370-69b1-4b43-9a4b-95006bc2f1c7-tmp\") pod \"marketplace-operator-547dbd544d-25lrq\" (UID: \"a1ada370-69b1-4b43-9a4b-95006bc2f1c7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-25lrq" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.374886 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1ada370-69b1-4b43-9a4b-95006bc2f1c7-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-25lrq\" (UID: \"a1ada370-69b1-4b43-9a4b-95006bc2f1c7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-25lrq" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.392632 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a1ada370-69b1-4b43-9a4b-95006bc2f1c7-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-25lrq\" (UID: \"a1ada370-69b1-4b43-9a4b-95006bc2f1c7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-25lrq" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.397577 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlb25\" (UniqueName: \"kubernetes.io/projected/a1ada370-69b1-4b43-9a4b-95006bc2f1c7-kube-api-access-zlb25\") pod \"marketplace-operator-547dbd544d-25lrq\" (UID: \"a1ada370-69b1-4b43-9a4b-95006bc2f1c7\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-25lrq" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.557526 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-25lrq" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.560077 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2dlcz" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.570508 5110 generic.go:358] "Generic (PLEG): container finished" podID="4276f77a-a21c-47f2-9902-e37c3ab865f5" containerID="68c63d9601cce8965e267e11a19e683789e44bdb30504077e32ea3f516b8f166" exitCode=0 Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.570592 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dlcz" event={"ID":"4276f77a-a21c-47f2-9902-e37c3ab865f5","Type":"ContainerDied","Data":"68c63d9601cce8965e267e11a19e683789e44bdb30504077e32ea3f516b8f166"} Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.570631 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dlcz" event={"ID":"4276f77a-a21c-47f2-9902-e37c3ab865f5","Type":"ContainerDied","Data":"8d921239e7a5def225333d6a4aef3f5dcaf7bc2d44478541ac5e5e15c3c1533c"} Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.570654 5110 scope.go:117] "RemoveContainer" containerID="68c63d9601cce8965e267e11a19e683789e44bdb30504077e32ea3f516b8f166" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.570859 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2dlcz" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.589828 5110 generic.go:358] "Generic (PLEG): container finished" podID="168e3c2a-eb5e-451a-bba9-93e41fb1e958" containerID="b5cc787dd637231a5a60a4bbefe8f7a5e7c978f0b3196dbd616e501d6092c251" exitCode=0 Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.590001 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6ths" event={"ID":"168e3c2a-eb5e-451a-bba9-93e41fb1e958","Type":"ContainerDied","Data":"b5cc787dd637231a5a60a4bbefe8f7a5e7c978f0b3196dbd616e501d6092c251"} Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.599289 5110 generic.go:358] "Generic (PLEG): container finished" podID="20cad200-6605-4fac-b28d-b84cd2d74a89" containerID="6f53b90ec9de29e7f8617fcaff0bb12a1efcfb7bf0d08d73c5d8f9e78d45a4b7" exitCode=0 Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.599383 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2w74" event={"ID":"20cad200-6605-4fac-b28d-b84cd2d74a89","Type":"ContainerDied","Data":"6f53b90ec9de29e7f8617fcaff0bb12a1efcfb7bf0d08d73c5d8f9e78d45a4b7"} Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.605104 5110 generic.go:358] "Generic (PLEG): container finished" podID="063f39d3-4de8-4c26-99fc-3148a8738541" containerID="612200d4a7f3c5db909d8b17a3551fc7a4bec36c082ea2e609088c07cca8197f" exitCode=0 Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.605253 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g6gk" event={"ID":"063f39d3-4de8-4c26-99fc-3148a8738541","Type":"ContainerDied","Data":"612200d4a7f3c5db909d8b17a3551fc7a4bec36c082ea2e609088c07cca8197f"} Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.608048 5110 generic.go:358] "Generic (PLEG): container finished" podID="4ba4d5a4-755b-4d6e-b250-cd705b244775" 
containerID="64fc4338a39da7ed8a6a82def166294c46ef7187f1462f5c6f71b68a028d6dc9" exitCode=0 Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.608166 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" event={"ID":"4ba4d5a4-755b-4d6e-b250-cd705b244775","Type":"ContainerDied","Data":"64fc4338a39da7ed8a6a82def166294c46ef7187f1462f5c6f71b68a028d6dc9"} Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.628057 5110 scope.go:117] "RemoveContainer" containerID="656781247ee60f7261dbe34ff63b4f0c54015fdb182ebcaaafd408adf49e9102" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.655205 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c2w74" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.657401 5110 scope.go:117] "RemoveContainer" containerID="9636bde2ac0001cb1087d339beadf9f5cde94dcccb034a0ea012d3112d49aefc" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.660541 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5g6gk" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.666161 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.679356 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4276f77a-a21c-47f2-9902-e37c3ab865f5-catalog-content\") pod \"4276f77a-a21c-47f2-9902-e37c3ab865f5\" (UID: \"4276f77a-a21c-47f2-9902-e37c3ab865f5\") " Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.679408 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4276f77a-a21c-47f2-9902-e37c3ab865f5-utilities\") pod \"4276f77a-a21c-47f2-9902-e37c3ab865f5\" (UID: \"4276f77a-a21c-47f2-9902-e37c3ab865f5\") " Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.679628 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndpgg\" (UniqueName: \"kubernetes.io/projected/4276f77a-a21c-47f2-9902-e37c3ab865f5-kube-api-access-ndpgg\") pod \"4276f77a-a21c-47f2-9902-e37c3ab865f5\" (UID: \"4276f77a-a21c-47f2-9902-e37c3ab865f5\") " Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.684089 5110 scope.go:117] "RemoveContainer" containerID="68c63d9601cce8965e267e11a19e683789e44bdb30504077e32ea3f516b8f166" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.685194 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4276f77a-a21c-47f2-9902-e37c3ab865f5-utilities" (OuterVolumeSpecName: "utilities") pod "4276f77a-a21c-47f2-9902-e37c3ab865f5" (UID: "4276f77a-a21c-47f2-9902-e37c3ab865f5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:16:01 crc kubenswrapper[5110]: E0126 00:16:01.685605 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68c63d9601cce8965e267e11a19e683789e44bdb30504077e32ea3f516b8f166\": container with ID starting with 68c63d9601cce8965e267e11a19e683789e44bdb30504077e32ea3f516b8f166 not found: ID does not exist" containerID="68c63d9601cce8965e267e11a19e683789e44bdb30504077e32ea3f516b8f166" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.685651 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68c63d9601cce8965e267e11a19e683789e44bdb30504077e32ea3f516b8f166"} err="failed to get container status \"68c63d9601cce8965e267e11a19e683789e44bdb30504077e32ea3f516b8f166\": rpc error: code = NotFound desc = could not find container \"68c63d9601cce8965e267e11a19e683789e44bdb30504077e32ea3f516b8f166\": container with ID starting with 68c63d9601cce8965e267e11a19e683789e44bdb30504077e32ea3f516b8f166 not found: ID does not exist" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.685674 5110 scope.go:117] "RemoveContainer" containerID="656781247ee60f7261dbe34ff63b4f0c54015fdb182ebcaaafd408adf49e9102" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.685747 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4276f77a-a21c-47f2-9902-e37c3ab865f5-kube-api-access-ndpgg" (OuterVolumeSpecName: "kube-api-access-ndpgg") pod "4276f77a-a21c-47f2-9902-e37c3ab865f5" (UID: "4276f77a-a21c-47f2-9902-e37c3ab865f5"). InnerVolumeSpecName "kube-api-access-ndpgg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:16:01 crc kubenswrapper[5110]: E0126 00:16:01.686068 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"656781247ee60f7261dbe34ff63b4f0c54015fdb182ebcaaafd408adf49e9102\": container with ID starting with 656781247ee60f7261dbe34ff63b4f0c54015fdb182ebcaaafd408adf49e9102 not found: ID does not exist" containerID="656781247ee60f7261dbe34ff63b4f0c54015fdb182ebcaaafd408adf49e9102" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.686122 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"656781247ee60f7261dbe34ff63b4f0c54015fdb182ebcaaafd408adf49e9102"} err="failed to get container status \"656781247ee60f7261dbe34ff63b4f0c54015fdb182ebcaaafd408adf49e9102\": rpc error: code = NotFound desc = could not find container \"656781247ee60f7261dbe34ff63b4f0c54015fdb182ebcaaafd408adf49e9102\": container with ID starting with 656781247ee60f7261dbe34ff63b4f0c54015fdb182ebcaaafd408adf49e9102 not found: ID does not exist" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.686166 5110 scope.go:117] "RemoveContainer" containerID="9636bde2ac0001cb1087d339beadf9f5cde94dcccb034a0ea012d3112d49aefc" Jan 26 00:16:01 crc kubenswrapper[5110]: E0126 00:16:01.686504 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9636bde2ac0001cb1087d339beadf9f5cde94dcccb034a0ea012d3112d49aefc\": container with ID starting with 9636bde2ac0001cb1087d339beadf9f5cde94dcccb034a0ea012d3112d49aefc not found: ID does not exist" containerID="9636bde2ac0001cb1087d339beadf9f5cde94dcccb034a0ea012d3112d49aefc" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.686534 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9636bde2ac0001cb1087d339beadf9f5cde94dcccb034a0ea012d3112d49aefc"} 
err="failed to get container status \"9636bde2ac0001cb1087d339beadf9f5cde94dcccb034a0ea012d3112d49aefc\": rpc error: code = NotFound desc = could not find container \"9636bde2ac0001cb1087d339beadf9f5cde94dcccb034a0ea012d3112d49aefc\": container with ID starting with 9636bde2ac0001cb1087d339beadf9f5cde94dcccb034a0ea012d3112d49aefc not found: ID does not exist" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.733603 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4276f77a-a21c-47f2-9902-e37c3ab865f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4276f77a-a21c-47f2-9902-e37c3ab865f5" (UID: "4276f77a-a21c-47f2-9902-e37c3ab865f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.764451 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d6ths" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.780557 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkjzj\" (UniqueName: \"kubernetes.io/projected/063f39d3-4de8-4c26-99fc-3148a8738541-kube-api-access-jkjzj\") pod \"063f39d3-4de8-4c26-99fc-3148a8738541\" (UID: \"063f39d3-4de8-4c26-99fc-3148a8738541\") " Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.780633 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ba4d5a4-755b-4d6e-b250-cd705b244775-marketplace-trusted-ca\") pod \"4ba4d5a4-755b-4d6e-b250-cd705b244775\" (UID: \"4ba4d5a4-755b-4d6e-b250-cd705b244775\") " Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.780700 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/063f39d3-4de8-4c26-99fc-3148a8738541-catalog-content\") pod \"063f39d3-4de8-4c26-99fc-3148a8738541\" (UID: \"063f39d3-4de8-4c26-99fc-3148a8738541\") " Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.781023 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhpqv\" (UniqueName: \"kubernetes.io/projected/4ba4d5a4-755b-4d6e-b250-cd705b244775-kube-api-access-mhpqv\") pod \"4ba4d5a4-755b-4d6e-b250-cd705b244775\" (UID: \"4ba4d5a4-755b-4d6e-b250-cd705b244775\") " Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.781072 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ba4d5a4-755b-4d6e-b250-cd705b244775-tmp\") pod \"4ba4d5a4-755b-4d6e-b250-cd705b244775\" (UID: \"4ba4d5a4-755b-4d6e-b250-cd705b244775\") " Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.781106 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfpdt\" (UniqueName: \"kubernetes.io/projected/20cad200-6605-4fac-b28d-b84cd2d74a89-kube-api-access-kfpdt\") pod \"20cad200-6605-4fac-b28d-b84cd2d74a89\" (UID: \"20cad200-6605-4fac-b28d-b84cd2d74a89\") " Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.781157 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4ba4d5a4-755b-4d6e-b250-cd705b244775-marketplace-operator-metrics\") pod \"4ba4d5a4-755b-4d6e-b250-cd705b244775\" (UID: \"4ba4d5a4-755b-4d6e-b250-cd705b244775\") " Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.781182 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20cad200-6605-4fac-b28d-b84cd2d74a89-catalog-content\") pod \"20cad200-6605-4fac-b28d-b84cd2d74a89\" (UID: \"20cad200-6605-4fac-b28d-b84cd2d74a89\") " Jan 26 00:16:01 crc 
kubenswrapper[5110]: I0126 00:16:01.781225 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/063f39d3-4de8-4c26-99fc-3148a8738541-utilities\") pod \"063f39d3-4de8-4c26-99fc-3148a8738541\" (UID: \"063f39d3-4de8-4c26-99fc-3148a8738541\") " Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.781245 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20cad200-6605-4fac-b28d-b84cd2d74a89-utilities\") pod \"20cad200-6605-4fac-b28d-b84cd2d74a89\" (UID: \"20cad200-6605-4fac-b28d-b84cd2d74a89\") " Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.781555 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ndpgg\" (UniqueName: \"kubernetes.io/projected/4276f77a-a21c-47f2-9902-e37c3ab865f5-kube-api-access-ndpgg\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.781569 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4276f77a-a21c-47f2-9902-e37c3ab865f5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.781579 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4276f77a-a21c-47f2-9902-e37c3ab865f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.782524 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20cad200-6605-4fac-b28d-b84cd2d74a89-utilities" (OuterVolumeSpecName: "utilities") pod "20cad200-6605-4fac-b28d-b84cd2d74a89" (UID: "20cad200-6605-4fac-b28d-b84cd2d74a89"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.782717 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ba4d5a4-755b-4d6e-b250-cd705b244775-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "4ba4d5a4-755b-4d6e-b250-cd705b244775" (UID: "4ba4d5a4-755b-4d6e-b250-cd705b244775"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.783340 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ba4d5a4-755b-4d6e-b250-cd705b244775-tmp" (OuterVolumeSpecName: "tmp") pod "4ba4d5a4-755b-4d6e-b250-cd705b244775" (UID: "4ba4d5a4-755b-4d6e-b250-cd705b244775"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.783821 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/063f39d3-4de8-4c26-99fc-3148a8738541-utilities" (OuterVolumeSpecName: "utilities") pod "063f39d3-4de8-4c26-99fc-3148a8738541" (UID: "063f39d3-4de8-4c26-99fc-3148a8738541"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.789062 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/063f39d3-4de8-4c26-99fc-3148a8738541-kube-api-access-jkjzj" (OuterVolumeSpecName: "kube-api-access-jkjzj") pod "063f39d3-4de8-4c26-99fc-3148a8738541" (UID: "063f39d3-4de8-4c26-99fc-3148a8738541"). InnerVolumeSpecName "kube-api-access-jkjzj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.793446 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20cad200-6605-4fac-b28d-b84cd2d74a89-kube-api-access-kfpdt" (OuterVolumeSpecName: "kube-api-access-kfpdt") pod "20cad200-6605-4fac-b28d-b84cd2d74a89" (UID: "20cad200-6605-4fac-b28d-b84cd2d74a89"). InnerVolumeSpecName "kube-api-access-kfpdt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.794207 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ba4d5a4-755b-4d6e-b250-cd705b244775-kube-api-access-mhpqv" (OuterVolumeSpecName: "kube-api-access-mhpqv") pod "4ba4d5a4-755b-4d6e-b250-cd705b244775" (UID: "4ba4d5a4-755b-4d6e-b250-cd705b244775"). InnerVolumeSpecName "kube-api-access-mhpqv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.796569 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba4d5a4-755b-4d6e-b250-cd705b244775-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "4ba4d5a4-755b-4d6e-b250-cd705b244775" (UID: "4ba4d5a4-755b-4d6e-b250-cd705b244775"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.801721 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/063f39d3-4de8-4c26-99fc-3148a8738541-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "063f39d3-4de8-4c26-99fc-3148a8738541" (UID: "063f39d3-4de8-4c26-99fc-3148a8738541"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.850778 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20cad200-6605-4fac-b28d-b84cd2d74a89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20cad200-6605-4fac-b28d-b84cd2d74a89" (UID: "20cad200-6605-4fac-b28d-b84cd2d74a89"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.882115 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/168e3c2a-eb5e-451a-bba9-93e41fb1e958-catalog-content\") pod \"168e3c2a-eb5e-451a-bba9-93e41fb1e958\" (UID: \"168e3c2a-eb5e-451a-bba9-93e41fb1e958\") " Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.882186 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/168e3c2a-eb5e-451a-bba9-93e41fb1e958-utilities\") pod \"168e3c2a-eb5e-451a-bba9-93e41fb1e958\" (UID: \"168e3c2a-eb5e-451a-bba9-93e41fb1e958\") " Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.882241 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bj46\" (UniqueName: \"kubernetes.io/projected/168e3c2a-eb5e-451a-bba9-93e41fb1e958-kube-api-access-6bj46\") pod \"168e3c2a-eb5e-451a-bba9-93e41fb1e958\" (UID: \"168e3c2a-eb5e-451a-bba9-93e41fb1e958\") " Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.882515 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/063f39d3-4de8-4c26-99fc-3148a8738541-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.882527 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mhpqv\" (UniqueName: 
\"kubernetes.io/projected/4ba4d5a4-755b-4d6e-b250-cd705b244775-kube-api-access-mhpqv\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.882540 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ba4d5a4-755b-4d6e-b250-cd705b244775-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.882552 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kfpdt\" (UniqueName: \"kubernetes.io/projected/20cad200-6605-4fac-b28d-b84cd2d74a89-kube-api-access-kfpdt\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.882561 5110 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4ba4d5a4-755b-4d6e-b250-cd705b244775-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.882571 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20cad200-6605-4fac-b28d-b84cd2d74a89-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.882579 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/063f39d3-4de8-4c26-99fc-3148a8738541-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.882587 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20cad200-6605-4fac-b28d-b84cd2d74a89-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.882596 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jkjzj\" (UniqueName: \"kubernetes.io/projected/063f39d3-4de8-4c26-99fc-3148a8738541-kube-api-access-jkjzj\") on node \"crc\" 
DevicePath \"\"" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.882606 5110 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ba4d5a4-755b-4d6e-b250-cd705b244775-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.883889 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/168e3c2a-eb5e-451a-bba9-93e41fb1e958-utilities" (OuterVolumeSpecName: "utilities") pod "168e3c2a-eb5e-451a-bba9-93e41fb1e958" (UID: "168e3c2a-eb5e-451a-bba9-93e41fb1e958"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.886064 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/168e3c2a-eb5e-451a-bba9-93e41fb1e958-kube-api-access-6bj46" (OuterVolumeSpecName: "kube-api-access-6bj46") pod "168e3c2a-eb5e-451a-bba9-93e41fb1e958" (UID: "168e3c2a-eb5e-451a-bba9-93e41fb1e958"). InnerVolumeSpecName "kube-api-access-6bj46". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.908946 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2dlcz"] Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.913826 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2dlcz"] Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.984343 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/168e3c2a-eb5e-451a-bba9-93e41fb1e958-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.984386 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6bj46\" (UniqueName: \"kubernetes.io/projected/168e3c2a-eb5e-451a-bba9-93e41fb1e958-kube-api-access-6bj46\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:01 crc kubenswrapper[5110]: I0126 00:16:01.990893 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/168e3c2a-eb5e-451a-bba9-93e41fb1e958-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "168e3c2a-eb5e-451a-bba9-93e41fb1e958" (UID: "168e3c2a-eb5e-451a-bba9-93e41fb1e958"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.085589 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-25lrq"] Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.086399 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/168e3c2a-eb5e-451a-bba9-93e41fb1e958-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.617306 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5g6gk" Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.617296 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g6gk" event={"ID":"063f39d3-4de8-4c26-99fc-3148a8738541","Type":"ContainerDied","Data":"d9ecbc99cba0f6ba00965177aa64e3f0d92941a2af113982c8b4a4fb2c6d8d73"} Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.617743 5110 scope.go:117] "RemoveContainer" containerID="612200d4a7f3c5db909d8b17a3551fc7a4bec36c082ea2e609088c07cca8197f" Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.619693 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-25lrq" event={"ID":"a1ada370-69b1-4b43-9a4b-95006bc2f1c7","Type":"ContainerStarted","Data":"217be88452b9f019f9ea6a13e5ca23bac6345b1834305f455b5b3db2519483df"} Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.619745 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-25lrq" event={"ID":"a1ada370-69b1-4b43-9a4b-95006bc2f1c7","Type":"ContainerStarted","Data":"4632d2a182d95f6f25a5caea78b2af0a096d69567e0c2e5749ab93afad549372"} Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.619837 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-25lrq" Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.624165 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.624428 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-bjwbh" event={"ID":"4ba4d5a4-755b-4d6e-b250-cd705b244775","Type":"ContainerDied","Data":"f5fdb38cd83bd1918db414123327b2499349b9dcb1e86f6c1f6eb742753e87ca"} Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.635713 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d6ths" Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.635745 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6ths" event={"ID":"168e3c2a-eb5e-451a-bba9-93e41fb1e958","Type":"ContainerDied","Data":"dcc05bbda57050aa25fe308b52cb9cb6253ae4d76a0f4da40a76164897a3d634"} Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.638851 5110 scope.go:117] "RemoveContainer" containerID="8242ab45d7f7e5501d6c0e3dce5468384304167e12d1aad8812bdd705b090385" Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.639465 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-25lrq" Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.643657 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2w74" event={"ID":"20cad200-6605-4fac-b28d-b84cd2d74a89","Type":"ContainerDied","Data":"c106de74666cc50a282e74a521bb87502718ee2c81d38e89d01e089e0a13bd46"} Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.643768 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2w74" Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.652912 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-25lrq" podStartSLOduration=1.6528881420000001 podStartE2EDuration="1.652888142s" podCreationTimestamp="2026-01-26 00:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:16:02.647483684 +0000 UTC m=+419.876382293" watchObservedRunningTime="2026-01-26 00:16:02.652888142 +0000 UTC m=+419.881786751" Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.675281 5110 scope.go:117] "RemoveContainer" containerID="ffd77a85e65077cade19a22d9d8d77ab99c384e08f01758d32ef90ee0db56f01" Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.728462 5110 scope.go:117] "RemoveContainer" containerID="64fc4338a39da7ed8a6a82def166294c46ef7187f1462f5c6f71b68a028d6dc9" Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.739853 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g6gk"] Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.746616 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g6gk"] Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.770321 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c2w74"] Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.778114 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c2w74"] Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.779973 5110 scope.go:117] "RemoveContainer" containerID="b5cc787dd637231a5a60a4bbefe8f7a5e7c978f0b3196dbd616e501d6092c251" Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.795610 5110 kubelet.go:2553] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-bjwbh"] Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.799348 5110 scope.go:117] "RemoveContainer" containerID="d19b993fcac9451820a0f17a7577c0c857300e04e03c0cd2677397baefdb8506" Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.799970 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-bjwbh"] Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.803509 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d6ths"] Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.806470 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d6ths"] Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.819363 5110 scope.go:117] "RemoveContainer" containerID="dd7529fbb5a18e9bc6a2aa71af30a8d697bf687529849e01d06feeac3220fda4" Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.836168 5110 scope.go:117] "RemoveContainer" containerID="6f53b90ec9de29e7f8617fcaff0bb12a1efcfb7bf0d08d73c5d8f9e78d45a4b7" Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.852672 5110 scope.go:117] "RemoveContainer" containerID="cd979908d80e7df3b36b6ff6105a2713018b397631ee8687effc84d1e5acf853" Jan 26 00:16:02 crc kubenswrapper[5110]: I0126 00:16:02.870989 5110 scope.go:117] "RemoveContainer" containerID="552ca27a832505b24105774c0c07e179c127848d2e354cb133fa690642f7f832" Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.299332 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nnd4t"] Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300121 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="063f39d3-4de8-4c26-99fc-3148a8738541" containerName="extract-utilities" Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300139 5110 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="063f39d3-4de8-4c26-99fc-3148a8738541" containerName="extract-utilities" Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300156 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="168e3c2a-eb5e-451a-bba9-93e41fb1e958" containerName="registry-server" Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300163 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="168e3c2a-eb5e-451a-bba9-93e41fb1e958" containerName="registry-server" Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300174 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4276f77a-a21c-47f2-9902-e37c3ab865f5" containerName="extract-content" Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300181 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="4276f77a-a21c-47f2-9902-e37c3ab865f5" containerName="extract-content" Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300192 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4276f77a-a21c-47f2-9902-e37c3ab865f5" containerName="extract-utilities" Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300198 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="4276f77a-a21c-47f2-9902-e37c3ab865f5" containerName="extract-utilities" Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300207 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="20cad200-6605-4fac-b28d-b84cd2d74a89" containerName="registry-server" Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300213 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="20cad200-6605-4fac-b28d-b84cd2d74a89" containerName="registry-server" Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300219 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="063f39d3-4de8-4c26-99fc-3148a8738541" containerName="extract-content" Jan 26 00:16:03 crc 
kubenswrapper[5110]: I0126 00:16:03.300225 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="063f39d3-4de8-4c26-99fc-3148a8738541" containerName="extract-content"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300231 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="063f39d3-4de8-4c26-99fc-3148a8738541" containerName="registry-server"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300237 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="063f39d3-4de8-4c26-99fc-3148a8738541" containerName="registry-server"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300243 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ba4d5a4-755b-4d6e-b250-cd705b244775" containerName="marketplace-operator"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300249 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba4d5a4-755b-4d6e-b250-cd705b244775" containerName="marketplace-operator"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300264 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="20cad200-6605-4fac-b28d-b84cd2d74a89" containerName="extract-content"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300270 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="20cad200-6605-4fac-b28d-b84cd2d74a89" containerName="extract-content"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300285 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="168e3c2a-eb5e-451a-bba9-93e41fb1e958" containerName="extract-utilities"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300290 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="168e3c2a-eb5e-451a-bba9-93e41fb1e958" containerName="extract-utilities"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300297 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="168e3c2a-eb5e-451a-bba9-93e41fb1e958" containerName="extract-content"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300304 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="168e3c2a-eb5e-451a-bba9-93e41fb1e958" containerName="extract-content"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300312 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="20cad200-6605-4fac-b28d-b84cd2d74a89" containerName="extract-utilities"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300318 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="20cad200-6605-4fac-b28d-b84cd2d74a89" containerName="extract-utilities"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300326 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4276f77a-a21c-47f2-9902-e37c3ab865f5" containerName="registry-server"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300333 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="4276f77a-a21c-47f2-9902-e37c3ab865f5" containerName="registry-server"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300472 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ba4d5a4-755b-4d6e-b250-cd705b244775" containerName="marketplace-operator"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300483 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="063f39d3-4de8-4c26-99fc-3148a8738541" containerName="registry-server"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300493 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="4276f77a-a21c-47f2-9902-e37c3ab865f5" containerName="registry-server"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300501 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="20cad200-6605-4fac-b28d-b84cd2d74a89" containerName="registry-server"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.300512 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="168e3c2a-eb5e-451a-bba9-93e41fb1e958" containerName="registry-server"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.328610 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nnd4t"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.329441 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="063f39d3-4de8-4c26-99fc-3148a8738541" path="/var/lib/kubelet/pods/063f39d3-4de8-4c26-99fc-3148a8738541/volumes"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.330362 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="168e3c2a-eb5e-451a-bba9-93e41fb1e958" path="/var/lib/kubelet/pods/168e3c2a-eb5e-451a-bba9-93e41fb1e958/volumes"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.331329 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20cad200-6605-4fac-b28d-b84cd2d74a89" path="/var/lib/kubelet/pods/20cad200-6605-4fac-b28d-b84cd2d74a89/volumes"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.331959 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.333346 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4276f77a-a21c-47f2-9902-e37c3ab865f5" path="/var/lib/kubelet/pods/4276f77a-a21c-47f2-9902-e37c3ab865f5/volumes"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.334307 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ba4d5a4-755b-4d6e-b250-cd705b244775" path="/var/lib/kubelet/pods/4ba4d5a4-755b-4d6e-b250-cd705b244775/volumes"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.335705 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnd4t"]
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.412759 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/304af444-ca8b-464c-a6dd-e4aca996cb53-utilities\") pod \"redhat-marketplace-nnd4t\" (UID: \"304af444-ca8b-464c-a6dd-e4aca996cb53\") " pod="openshift-marketplace/redhat-marketplace-nnd4t"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.413127 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/304af444-ca8b-464c-a6dd-e4aca996cb53-catalog-content\") pod \"redhat-marketplace-nnd4t\" (UID: \"304af444-ca8b-464c-a6dd-e4aca996cb53\") " pod="openshift-marketplace/redhat-marketplace-nnd4t"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.413236 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t66rb\" (UniqueName: \"kubernetes.io/projected/304af444-ca8b-464c-a6dd-e4aca996cb53-kube-api-access-t66rb\") pod \"redhat-marketplace-nnd4t\" (UID: \"304af444-ca8b-464c-a6dd-e4aca996cb53\") " pod="openshift-marketplace/redhat-marketplace-nnd4t"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.501927 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2zd9j"]
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.509070 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2zd9j"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.512247 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.515269 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/304af444-ca8b-464c-a6dd-e4aca996cb53-catalog-content\") pod \"redhat-marketplace-nnd4t\" (UID: \"304af444-ca8b-464c-a6dd-e4aca996cb53\") " pod="openshift-marketplace/redhat-marketplace-nnd4t"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.515324 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t66rb\" (UniqueName: \"kubernetes.io/projected/304af444-ca8b-464c-a6dd-e4aca996cb53-kube-api-access-t66rb\") pod \"redhat-marketplace-nnd4t\" (UID: \"304af444-ca8b-464c-a6dd-e4aca996cb53\") " pod="openshift-marketplace/redhat-marketplace-nnd4t"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.515373 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/304af444-ca8b-464c-a6dd-e4aca996cb53-utilities\") pod \"redhat-marketplace-nnd4t\" (UID: \"304af444-ca8b-464c-a6dd-e4aca996cb53\") " pod="openshift-marketplace/redhat-marketplace-nnd4t"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.516062 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/304af444-ca8b-464c-a6dd-e4aca996cb53-utilities\") pod \"redhat-marketplace-nnd4t\" (UID: \"304af444-ca8b-464c-a6dd-e4aca996cb53\") " pod="openshift-marketplace/redhat-marketplace-nnd4t"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.516182 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/304af444-ca8b-464c-a6dd-e4aca996cb53-catalog-content\") pod \"redhat-marketplace-nnd4t\" (UID: \"304af444-ca8b-464c-a6dd-e4aca996cb53\") " pod="openshift-marketplace/redhat-marketplace-nnd4t"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.521441 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2zd9j"]
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.540457 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t66rb\" (UniqueName: \"kubernetes.io/projected/304af444-ca8b-464c-a6dd-e4aca996cb53-kube-api-access-t66rb\") pod \"redhat-marketplace-nnd4t\" (UID: \"304af444-ca8b-464c-a6dd-e4aca996cb53\") " pod="openshift-marketplace/redhat-marketplace-nnd4t"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.616561 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2268b8e4-3c7b-4c92-9430-4e4514b2f3c1-utilities\") pod \"certified-operators-2zd9j\" (UID: \"2268b8e4-3c7b-4c92-9430-4e4514b2f3c1\") " pod="openshift-marketplace/certified-operators-2zd9j"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.616644 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2268b8e4-3c7b-4c92-9430-4e4514b2f3c1-catalog-content\") pod \"certified-operators-2zd9j\" (UID: \"2268b8e4-3c7b-4c92-9430-4e4514b2f3c1\") " pod="openshift-marketplace/certified-operators-2zd9j"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.616908 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwrwl\" (UniqueName: \"kubernetes.io/projected/2268b8e4-3c7b-4c92-9430-4e4514b2f3c1-kube-api-access-vwrwl\") pod \"certified-operators-2zd9j\" (UID: \"2268b8e4-3c7b-4c92-9430-4e4514b2f3c1\") " pod="openshift-marketplace/certified-operators-2zd9j"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.654526 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nnd4t"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.718217 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vwrwl\" (UniqueName: \"kubernetes.io/projected/2268b8e4-3c7b-4c92-9430-4e4514b2f3c1-kube-api-access-vwrwl\") pod \"certified-operators-2zd9j\" (UID: \"2268b8e4-3c7b-4c92-9430-4e4514b2f3c1\") " pod="openshift-marketplace/certified-operators-2zd9j"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.718276 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2268b8e4-3c7b-4c92-9430-4e4514b2f3c1-utilities\") pod \"certified-operators-2zd9j\" (UID: \"2268b8e4-3c7b-4c92-9430-4e4514b2f3c1\") " pod="openshift-marketplace/certified-operators-2zd9j"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.718321 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2268b8e4-3c7b-4c92-9430-4e4514b2f3c1-catalog-content\") pod \"certified-operators-2zd9j\" (UID: \"2268b8e4-3c7b-4c92-9430-4e4514b2f3c1\") " pod="openshift-marketplace/certified-operators-2zd9j"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.718941 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2268b8e4-3c7b-4c92-9430-4e4514b2f3c1-catalog-content\") pod \"certified-operators-2zd9j\" (UID: \"2268b8e4-3c7b-4c92-9430-4e4514b2f3c1\") " pod="openshift-marketplace/certified-operators-2zd9j"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.719022 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2268b8e4-3c7b-4c92-9430-4e4514b2f3c1-utilities\") pod \"certified-operators-2zd9j\" (UID: \"2268b8e4-3c7b-4c92-9430-4e4514b2f3c1\") " pod="openshift-marketplace/certified-operators-2zd9j"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.745724 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwrwl\" (UniqueName: \"kubernetes.io/projected/2268b8e4-3c7b-4c92-9430-4e4514b2f3c1-kube-api-access-vwrwl\") pod \"certified-operators-2zd9j\" (UID: \"2268b8e4-3c7b-4c92-9430-4e4514b2f3c1\") " pod="openshift-marketplace/certified-operators-2zd9j"
Jan 26 00:16:03 crc kubenswrapper[5110]: I0126 00:16:03.894974 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2zd9j"
Jan 26 00:16:04 crc kubenswrapper[5110]: I0126 00:16:04.101311 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnd4t"]
Jan 26 00:16:04 crc kubenswrapper[5110]: I0126 00:16:04.313507 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2zd9j"]
Jan 26 00:16:04 crc kubenswrapper[5110]: W0126 00:16:04.353886 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2268b8e4_3c7b_4c92_9430_4e4514b2f3c1.slice/crio-b46516215a204c0bea65fe602b5678534476aa27a1d33695f2c5e4e4aa00661c WatchSource:0}: Error finding container b46516215a204c0bea65fe602b5678534476aa27a1d33695f2c5e4e4aa00661c: Status 404 returned error can't find the container with id b46516215a204c0bea65fe602b5678534476aa27a1d33695f2c5e4e4aa00661c
Jan 26 00:16:04 crc kubenswrapper[5110]: I0126 00:16:04.667786 5110 generic.go:358] "Generic (PLEG): container finished" podID="304af444-ca8b-464c-a6dd-e4aca996cb53" containerID="b0965f37ea7402dd449fa827f85f1ab536649f82ec63fbb0cd59dff9f3062b34" exitCode=0
Jan 26 00:16:04 crc kubenswrapper[5110]: I0126 00:16:04.668118 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnd4t" event={"ID":"304af444-ca8b-464c-a6dd-e4aca996cb53","Type":"ContainerDied","Data":"b0965f37ea7402dd449fa827f85f1ab536649f82ec63fbb0cd59dff9f3062b34"}
Jan 26 00:16:04 crc kubenswrapper[5110]: I0126 00:16:04.668181 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnd4t" event={"ID":"304af444-ca8b-464c-a6dd-e4aca996cb53","Type":"ContainerStarted","Data":"75ee5b3d59ddf192a92c6165d0a0562bebe8b54092296a81da11fc98f0c8d20d"}
Jan 26 00:16:04 crc kubenswrapper[5110]: I0126 00:16:04.669701 5110 generic.go:358] "Generic (PLEG): container finished" podID="2268b8e4-3c7b-4c92-9430-4e4514b2f3c1" containerID="13edacaceac2b2c4f107fadf46f0d5847fdb466a47f9b1d441d1f6be46ddff98" exitCode=0
Jan 26 00:16:04 crc kubenswrapper[5110]: I0126 00:16:04.669897 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zd9j" event={"ID":"2268b8e4-3c7b-4c92-9430-4e4514b2f3c1","Type":"ContainerDied","Data":"13edacaceac2b2c4f107fadf46f0d5847fdb466a47f9b1d441d1f6be46ddff98"}
Jan 26 00:16:04 crc kubenswrapper[5110]: I0126 00:16:04.672840 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zd9j" event={"ID":"2268b8e4-3c7b-4c92-9430-4e4514b2f3c1","Type":"ContainerStarted","Data":"b46516215a204c0bea65fe602b5678534476aa27a1d33695f2c5e4e4aa00661c"}
Jan 26 00:16:05 crc kubenswrapper[5110]: I0126 00:16:05.679022 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnd4t" event={"ID":"304af444-ca8b-464c-a6dd-e4aca996cb53","Type":"ContainerStarted","Data":"d6a90c098a2b93f956bb0f46458c913d988e2ad17baf4714c12c288e5080bc66"}
Jan 26 00:16:05 crc kubenswrapper[5110]: I0126 00:16:05.683852 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zd9j" event={"ID":"2268b8e4-3c7b-4c92-9430-4e4514b2f3c1","Type":"ContainerStarted","Data":"0f8b2de987bacbf602ef9699c4fcf36c1a652b2f89d9e7ff273f3b81807c6d91"}
Jan 26 00:16:05 crc kubenswrapper[5110]: I0126 00:16:05.692919 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2984n"]
Jan 26 00:16:05 crc kubenswrapper[5110]: I0126 00:16:05.704871 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2984n"
Jan 26 00:16:05 crc kubenswrapper[5110]: I0126 00:16:05.707282 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 26 00:16:05 crc kubenswrapper[5110]: I0126 00:16:05.709870 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2984n"]
Jan 26 00:16:05 crc kubenswrapper[5110]: I0126 00:16:05.850313 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50db59ae-4cff-430a-875e-9d7310641e25-utilities\") pod \"community-operators-2984n\" (UID: \"50db59ae-4cff-430a-875e-9d7310641e25\") " pod="openshift-marketplace/community-operators-2984n"
Jan 26 00:16:05 crc kubenswrapper[5110]: I0126 00:16:05.850510 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50db59ae-4cff-430a-875e-9d7310641e25-catalog-content\") pod \"community-operators-2984n\" (UID: \"50db59ae-4cff-430a-875e-9d7310641e25\") " pod="openshift-marketplace/community-operators-2984n"
Jan 26 00:16:05 crc kubenswrapper[5110]: I0126 00:16:05.850591 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fwq8\" (UniqueName: \"kubernetes.io/projected/50db59ae-4cff-430a-875e-9d7310641e25-kube-api-access-4fwq8\") pod \"community-operators-2984n\" (UID: \"50db59ae-4cff-430a-875e-9d7310641e25\") " pod="openshift-marketplace/community-operators-2984n"
Jan 26 00:16:05 crc kubenswrapper[5110]: I0126 00:16:05.887181 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zwn7c"]
Jan 26 00:16:05 crc kubenswrapper[5110]: I0126 00:16:05.893334 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zwn7c"
Jan 26 00:16:05 crc kubenswrapper[5110]: I0126 00:16:05.896651 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Jan 26 00:16:05 crc kubenswrapper[5110]: I0126 00:16:05.906523 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zwn7c"]
Jan 26 00:16:05 crc kubenswrapper[5110]: I0126 00:16:05.952193 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50db59ae-4cff-430a-875e-9d7310641e25-catalog-content\") pod \"community-operators-2984n\" (UID: \"50db59ae-4cff-430a-875e-9d7310641e25\") " pod="openshift-marketplace/community-operators-2984n"
Jan 26 00:16:05 crc kubenswrapper[5110]: I0126 00:16:05.952281 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4fwq8\" (UniqueName: \"kubernetes.io/projected/50db59ae-4cff-430a-875e-9d7310641e25-kube-api-access-4fwq8\") pod \"community-operators-2984n\" (UID: \"50db59ae-4cff-430a-875e-9d7310641e25\") " pod="openshift-marketplace/community-operators-2984n"
Jan 26 00:16:05 crc kubenswrapper[5110]: I0126 00:16:05.952345 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50db59ae-4cff-430a-875e-9d7310641e25-utilities\") pod \"community-operators-2984n\" (UID: \"50db59ae-4cff-430a-875e-9d7310641e25\") " pod="openshift-marketplace/community-operators-2984n"
Jan 26 00:16:05 crc kubenswrapper[5110]: I0126 00:16:05.952780 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50db59ae-4cff-430a-875e-9d7310641e25-catalog-content\") pod \"community-operators-2984n\" (UID: \"50db59ae-4cff-430a-875e-9d7310641e25\") " pod="openshift-marketplace/community-operators-2984n"
Jan 26 00:16:05 crc kubenswrapper[5110]: I0126 00:16:05.952834 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50db59ae-4cff-430a-875e-9d7310641e25-utilities\") pod \"community-operators-2984n\" (UID: \"50db59ae-4cff-430a-875e-9d7310641e25\") " pod="openshift-marketplace/community-operators-2984n"
Jan 26 00:16:05 crc kubenswrapper[5110]: I0126 00:16:05.976947 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fwq8\" (UniqueName: \"kubernetes.io/projected/50db59ae-4cff-430a-875e-9d7310641e25-kube-api-access-4fwq8\") pod \"community-operators-2984n\" (UID: \"50db59ae-4cff-430a-875e-9d7310641e25\") " pod="openshift-marketplace/community-operators-2984n"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.023212 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2984n"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.053827 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b9d67c0-e0dd-4db3-85d5-958a460183a3-utilities\") pod \"redhat-operators-zwn7c\" (UID: \"9b9d67c0-e0dd-4db3-85d5-958a460183a3\") " pod="openshift-marketplace/redhat-operators-zwn7c"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.053930 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b9d67c0-e0dd-4db3-85d5-958a460183a3-catalog-content\") pod \"redhat-operators-zwn7c\" (UID: \"9b9d67c0-e0dd-4db3-85d5-958a460183a3\") " pod="openshift-marketplace/redhat-operators-zwn7c"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.054025 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5zd5\" (UniqueName: \"kubernetes.io/projected/9b9d67c0-e0dd-4db3-85d5-958a460183a3-kube-api-access-k5zd5\") pod \"redhat-operators-zwn7c\" (UID: \"9b9d67c0-e0dd-4db3-85d5-958a460183a3\") " pod="openshift-marketplace/redhat-operators-zwn7c"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.154684 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k5zd5\" (UniqueName: \"kubernetes.io/projected/9b9d67c0-e0dd-4db3-85d5-958a460183a3-kube-api-access-k5zd5\") pod \"redhat-operators-zwn7c\" (UID: \"9b9d67c0-e0dd-4db3-85d5-958a460183a3\") " pod="openshift-marketplace/redhat-operators-zwn7c"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.155102 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b9d67c0-e0dd-4db3-85d5-958a460183a3-utilities\") pod \"redhat-operators-zwn7c\" (UID: \"9b9d67c0-e0dd-4db3-85d5-958a460183a3\") " pod="openshift-marketplace/redhat-operators-zwn7c"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.155142 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b9d67c0-e0dd-4db3-85d5-958a460183a3-catalog-content\") pod \"redhat-operators-zwn7c\" (UID: \"9b9d67c0-e0dd-4db3-85d5-958a460183a3\") " pod="openshift-marketplace/redhat-operators-zwn7c"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.155763 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b9d67c0-e0dd-4db3-85d5-958a460183a3-catalog-content\") pod \"redhat-operators-zwn7c\" (UID: \"9b9d67c0-e0dd-4db3-85d5-958a460183a3\") " pod="openshift-marketplace/redhat-operators-zwn7c"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.155862 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b9d67c0-e0dd-4db3-85d5-958a460183a3-utilities\") pod \"redhat-operators-zwn7c\" (UID: \"9b9d67c0-e0dd-4db3-85d5-958a460183a3\") " pod="openshift-marketplace/redhat-operators-zwn7c"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.174642 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5zd5\" (UniqueName: \"kubernetes.io/projected/9b9d67c0-e0dd-4db3-85d5-958a460183a3-kube-api-access-k5zd5\") pod \"redhat-operators-zwn7c\" (UID: \"9b9d67c0-e0dd-4db3-85d5-958a460183a3\") " pod="openshift-marketplace/redhat-operators-zwn7c"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.208786 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zwn7c"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.469884 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2984n"]
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.542211 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"]
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.553228 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.554385 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"]
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.665425 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.666335 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d07bce38-70bd-493e-b28e-981dbca04e1f-registry-certificates\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.666740 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d07bce38-70bd-493e-b28e-981dbca04e1f-trusted-ca\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.667177 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d07bce38-70bd-493e-b28e-981dbca04e1f-registry-tls\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.667339 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d07bce38-70bd-493e-b28e-981dbca04e1f-bound-sa-token\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.667575 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9bk2\" (UniqueName: \"kubernetes.io/projected/d07bce38-70bd-493e-b28e-981dbca04e1f-kube-api-access-r9bk2\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.667711 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d07bce38-70bd-493e-b28e-981dbca04e1f-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.667851 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d07bce38-70bd-493e-b28e-981dbca04e1f-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.680002 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zwn7c"]
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.704158 5110 generic.go:358] "Generic (PLEG): container finished" podID="2268b8e4-3c7b-4c92-9430-4e4514b2f3c1" containerID="0f8b2de987bacbf602ef9699c4fcf36c1a652b2f89d9e7ff273f3b81807c6d91" exitCode=0
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.704393 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zd9j" event={"ID":"2268b8e4-3c7b-4c92-9430-4e4514b2f3c1","Type":"ContainerDied","Data":"0f8b2de987bacbf602ef9699c4fcf36c1a652b2f89d9e7ff273f3b81807c6d91"}
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.709288 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2984n" event={"ID":"50db59ae-4cff-430a-875e-9d7310641e25","Type":"ContainerStarted","Data":"4d7ac0173cf1a7eb6501cdeeac7cfb7389174055a36a6abb337cdc47f7f92af5"}
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.733484 5110 generic.go:358] "Generic (PLEG): container finished" podID="304af444-ca8b-464c-a6dd-e4aca996cb53" containerID="d6a90c098a2b93f956bb0f46458c913d988e2ad17baf4714c12c288e5080bc66" exitCode=0
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.733948 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnd4t" event={"ID":"304af444-ca8b-464c-a6dd-e4aca996cb53","Type":"ContainerDied","Data":"d6a90c098a2b93f956bb0f46458c913d988e2ad17baf4714c12c288e5080bc66"}
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.772564 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bk2\" (UniqueName: \"kubernetes.io/projected/d07bce38-70bd-493e-b28e-981dbca04e1f-kube-api-access-r9bk2\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.773132 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d07bce38-70bd-493e-b28e-981dbca04e1f-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.773157 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d07bce38-70bd-493e-b28e-981dbca04e1f-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.773210 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d07bce38-70bd-493e-b28e-981dbca04e1f-registry-certificates\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.773233 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d07bce38-70bd-493e-b28e-981dbca04e1f-trusted-ca\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.773303 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d07bce38-70bd-493e-b28e-981dbca04e1f-registry-tls\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.773337 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d07bce38-70bd-493e-b28e-981dbca04e1f-bound-sa-token\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.774307 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d07bce38-70bd-493e-b28e-981dbca04e1f-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.775538 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d07bce38-70bd-493e-b28e-981dbca04e1f-registry-certificates\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.779784 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d07bce38-70bd-493e-b28e-981dbca04e1f-trusted-ca\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.784632 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d07bce38-70bd-493e-b28e-981dbca04e1f-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.792121 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.794215 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d07bce38-70bd-493e-b28e-981dbca04e1f-registry-tls\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.807883 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d07bce38-70bd-493e-b28e-981dbca04e1f-bound-sa-token\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.808750 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9bk2\" (UniqueName: \"kubernetes.io/projected/d07bce38-70bd-493e-b28e-981dbca04e1f-kube-api-access-r9bk2\") pod \"image-registry-5d9d95bf5b-g8dg4\" (UID: \"d07bce38-70bd-493e-b28e-981dbca04e1f\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:06 crc kubenswrapper[5110]: I0126 00:16:06.870530 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"
Jan 26 00:16:07 crc kubenswrapper[5110]: I0126 00:16:07.315972 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-g8dg4"]
Jan 26 00:16:07 crc kubenswrapper[5110]: W0126 00:16:07.322932 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd07bce38_70bd_493e_b28e_981dbca04e1f.slice/crio-fb60b36e82d00a19e4eae856d4597d07992d658fd235a394b3285ccc92287cf3 WatchSource:0}: Error finding container fb60b36e82d00a19e4eae856d4597d07992d658fd235a394b3285ccc92287cf3: Status 404 returned error can't find the container with id fb60b36e82d00a19e4eae856d4597d07992d658fd235a394b3285ccc92287cf3
Jan 26 00:16:07 crc kubenswrapper[5110]: I0126 00:16:07.743401 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnd4t" event={"ID":"304af444-ca8b-464c-a6dd-e4aca996cb53","Type":"ContainerStarted","Data":"9199986e21190916de438833427ee11059dd0b2dbaadc6131aba4914f2788d37"}
Jan 26 00:16:07 crc kubenswrapper[5110]: I0126 00:16:07.748851 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zd9j" event={"ID":"2268b8e4-3c7b-4c92-9430-4e4514b2f3c1","Type":"ContainerStarted","Data":"813429339e0e7bb59c36673be28727cc2df14a353e64cac6380703a0d642851b"}
Jan 26 00:16:07 crc
kubenswrapper[5110]: I0126 00:16:07.752311 5110 generic.go:358] "Generic (PLEG): container finished" podID="50db59ae-4cff-430a-875e-9d7310641e25" containerID="9fb4065c6405e77c01d9fc2d98e1873458ad93b289f2836af023466ed681038e" exitCode=0 Jan 26 00:16:07 crc kubenswrapper[5110]: I0126 00:16:07.752425 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2984n" event={"ID":"50db59ae-4cff-430a-875e-9d7310641e25","Type":"ContainerDied","Data":"9fb4065c6405e77c01d9fc2d98e1873458ad93b289f2836af023466ed681038e"} Jan 26 00:16:07 crc kubenswrapper[5110]: I0126 00:16:07.756552 5110 generic.go:358] "Generic (PLEG): container finished" podID="9b9d67c0-e0dd-4db3-85d5-958a460183a3" containerID="f7ef5f03b29988038b5e46e3ed9c9b386bada55c08f7de0e1abefb5be54c4f41" exitCode=0 Jan 26 00:16:07 crc kubenswrapper[5110]: I0126 00:16:07.756650 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwn7c" event={"ID":"9b9d67c0-e0dd-4db3-85d5-958a460183a3","Type":"ContainerDied","Data":"f7ef5f03b29988038b5e46e3ed9c9b386bada55c08f7de0e1abefb5be54c4f41"} Jan 26 00:16:07 crc kubenswrapper[5110]: I0126 00:16:07.756678 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwn7c" event={"ID":"9b9d67c0-e0dd-4db3-85d5-958a460183a3","Type":"ContainerStarted","Data":"349d053e773ff26ef1dc4e5f95ff5ce1ec7135a774735fe3c9adf1e631855e1e"} Jan 26 00:16:07 crc kubenswrapper[5110]: I0126 00:16:07.758234 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4" event={"ID":"d07bce38-70bd-493e-b28e-981dbca04e1f","Type":"ContainerStarted","Data":"c0135ef0603ea9c3ac385de9b83b769840cc3a826534b0d40855828cdd4a8255"} Jan 26 00:16:07 crc kubenswrapper[5110]: I0126 00:16:07.758306 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4" 
event={"ID":"d07bce38-70bd-493e-b28e-981dbca04e1f","Type":"ContainerStarted","Data":"fb60b36e82d00a19e4eae856d4597d07992d658fd235a394b3285ccc92287cf3"} Jan 26 00:16:07 crc kubenswrapper[5110]: I0126 00:16:07.758814 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4" Jan 26 00:16:07 crc kubenswrapper[5110]: I0126 00:16:07.790641 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nnd4t" podStartSLOduration=3.990493339 podStartE2EDuration="4.790623632s" podCreationTimestamp="2026-01-26 00:16:03 +0000 UTC" firstStartedPulling="2026-01-26 00:16:04.66896085 +0000 UTC m=+421.897859449" lastFinishedPulling="2026-01-26 00:16:05.469091133 +0000 UTC m=+422.697989742" observedRunningTime="2026-01-26 00:16:07.772172822 +0000 UTC m=+425.001071451" watchObservedRunningTime="2026-01-26 00:16:07.790623632 +0000 UTC m=+425.019522241" Jan 26 00:16:07 crc kubenswrapper[5110]: I0126 00:16:07.793055 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2zd9j" podStartSLOduration=4.08413052 podStartE2EDuration="4.793048323s" podCreationTimestamp="2026-01-26 00:16:03 +0000 UTC" firstStartedPulling="2026-01-26 00:16:04.670387401 +0000 UTC m=+421.899286010" lastFinishedPulling="2026-01-26 00:16:05.379305184 +0000 UTC m=+422.608203813" observedRunningTime="2026-01-26 00:16:07.790097577 +0000 UTC m=+425.018996186" watchObservedRunningTime="2026-01-26 00:16:07.793048323 +0000 UTC m=+425.021946932" Jan 26 00:16:07 crc kubenswrapper[5110]: I0126 00:16:07.852624 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4" podStartSLOduration=1.852606537 podStartE2EDuration="1.852606537s" podCreationTimestamp="2026-01-26 00:16:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:16:07.851423542 +0000 UTC m=+425.080322151" watchObservedRunningTime="2026-01-26 00:16:07.852606537 +0000 UTC m=+425.081505136" Jan 26 00:16:08 crc kubenswrapper[5110]: I0126 00:16:08.776470 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2984n" event={"ID":"50db59ae-4cff-430a-875e-9d7310641e25","Type":"ContainerStarted","Data":"093c181f392b70fbf886db5e14c809f0de9bd4cb6c97c3b07fc25aaaf7d0ba27"} Jan 26 00:16:08 crc kubenswrapper[5110]: I0126 00:16:08.781314 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwn7c" event={"ID":"9b9d67c0-e0dd-4db3-85d5-958a460183a3","Type":"ContainerStarted","Data":"55c3d78d89d0de6e574c42d29a5daf3a0297e340c5c6b3ed152475764bd58428"} Jan 26 00:16:09 crc kubenswrapper[5110]: I0126 00:16:09.789849 5110 generic.go:358] "Generic (PLEG): container finished" podID="50db59ae-4cff-430a-875e-9d7310641e25" containerID="093c181f392b70fbf886db5e14c809f0de9bd4cb6c97c3b07fc25aaaf7d0ba27" exitCode=0 Jan 26 00:16:09 crc kubenswrapper[5110]: I0126 00:16:09.789972 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2984n" event={"ID":"50db59ae-4cff-430a-875e-9d7310641e25","Type":"ContainerDied","Data":"093c181f392b70fbf886db5e14c809f0de9bd4cb6c97c3b07fc25aaaf7d0ba27"} Jan 26 00:16:09 crc kubenswrapper[5110]: I0126 00:16:09.792987 5110 generic.go:358] "Generic (PLEG): container finished" podID="9b9d67c0-e0dd-4db3-85d5-958a460183a3" containerID="55c3d78d89d0de6e574c42d29a5daf3a0297e340c5c6b3ed152475764bd58428" exitCode=0 Jan 26 00:16:09 crc kubenswrapper[5110]: I0126 00:16:09.793314 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwn7c" 
event={"ID":"9b9d67c0-e0dd-4db3-85d5-958a460183a3","Type":"ContainerDied","Data":"55c3d78d89d0de6e574c42d29a5daf3a0297e340c5c6b3ed152475764bd58428"} Jan 26 00:16:10 crc kubenswrapper[5110]: I0126 00:16:10.803567 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2984n" event={"ID":"50db59ae-4cff-430a-875e-9d7310641e25","Type":"ContainerStarted","Data":"b9c1914fc9f8d853843a44cc393dea0f8e83893f83ea1009eaec57b9655729a2"} Jan 26 00:16:10 crc kubenswrapper[5110]: I0126 00:16:10.807639 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwn7c" event={"ID":"9b9d67c0-e0dd-4db3-85d5-958a460183a3","Type":"ContainerStarted","Data":"3178d9295b4a95873f1d04dcc8fba5885df89ef4cc060144b3c8465b5a6d7866"} Jan 26 00:16:10 crc kubenswrapper[5110]: I0126 00:16:10.825996 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2984n" podStartSLOduration=5.159159628 podStartE2EDuration="5.825971849s" podCreationTimestamp="2026-01-26 00:16:05 +0000 UTC" firstStartedPulling="2026-01-26 00:16:07.753285479 +0000 UTC m=+424.982184088" lastFinishedPulling="2026-01-26 00:16:08.42009769 +0000 UTC m=+425.648996309" observedRunningTime="2026-01-26 00:16:10.823429495 +0000 UTC m=+428.052328114" watchObservedRunningTime="2026-01-26 00:16:10.825971849 +0000 UTC m=+428.054870448" Jan 26 00:16:10 crc kubenswrapper[5110]: I0126 00:16:10.845642 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zwn7c" podStartSLOduration=5.096816463 podStartE2EDuration="5.845613804s" podCreationTimestamp="2026-01-26 00:16:05 +0000 UTC" firstStartedPulling="2026-01-26 00:16:07.758373238 +0000 UTC m=+424.987271847" lastFinishedPulling="2026-01-26 00:16:08.507170579 +0000 UTC m=+425.736069188" observedRunningTime="2026-01-26 00:16:10.841269347 +0000 UTC m=+428.070167986" 
watchObservedRunningTime="2026-01-26 00:16:10.845613804 +0000 UTC m=+428.074512413" Jan 26 00:16:13 crc kubenswrapper[5110]: I0126 00:16:13.655194 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-nnd4t" Jan 26 00:16:13 crc kubenswrapper[5110]: I0126 00:16:13.656073 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nnd4t" Jan 26 00:16:13 crc kubenswrapper[5110]: I0126 00:16:13.711502 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nnd4t" Jan 26 00:16:13 crc kubenswrapper[5110]: I0126 00:16:13.872756 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nnd4t" Jan 26 00:16:13 crc kubenswrapper[5110]: I0126 00:16:13.898028 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-2zd9j" Jan 26 00:16:13 crc kubenswrapper[5110]: I0126 00:16:13.899088 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2zd9j" Jan 26 00:16:13 crc kubenswrapper[5110]: I0126 00:16:13.941664 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2zd9j" Jan 26 00:16:14 crc kubenswrapper[5110]: I0126 00:16:14.879286 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2zd9j" Jan 26 00:16:16 crc kubenswrapper[5110]: I0126 00:16:16.024134 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2984n" Jan 26 00:16:16 crc kubenswrapper[5110]: I0126 00:16:16.025265 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-marketplace/community-operators-2984n" Jan 26 00:16:16 crc kubenswrapper[5110]: I0126 00:16:16.091020 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2984n" Jan 26 00:16:16 crc kubenswrapper[5110]: I0126 00:16:16.209878 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zwn7c" Jan 26 00:16:16 crc kubenswrapper[5110]: I0126 00:16:16.210251 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-zwn7c" Jan 26 00:16:16 crc kubenswrapper[5110]: I0126 00:16:16.252673 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zwn7c" Jan 26 00:16:16 crc kubenswrapper[5110]: I0126 00:16:16.916069 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zwn7c" Jan 26 00:16:16 crc kubenswrapper[5110]: I0126 00:16:16.919021 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2984n" Jan 26 00:16:27 crc kubenswrapper[5110]: I0126 00:16:26.813297 5110 patch_prober.go:28] interesting pod/machine-config-daemon-c6tpr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:16:27 crc kubenswrapper[5110]: I0126 00:16:26.814354 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:16:29 crc kubenswrapper[5110]: I0126 
00:16:29.799672 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-g8dg4" Jan 26 00:16:29 crc kubenswrapper[5110]: I0126 00:16:29.855067 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-brrzw"] Jan 26 00:16:54 crc kubenswrapper[5110]: I0126 00:16:54.891494 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-brrzw" podUID="a1b4fb8f-576d-4a52-90c3-1b6db6a4170b" containerName="registry" containerID="cri-o://e72785daf5583e885437fbffb06256814d325302c070a880645c97270d88c27d" gracePeriod=30 Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.122758 5110 generic.go:358] "Generic (PLEG): container finished" podID="a1b4fb8f-576d-4a52-90c3-1b6db6a4170b" containerID="e72785daf5583e885437fbffb06256814d325302c070a880645c97270d88c27d" exitCode=0 Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.122896 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-brrzw" event={"ID":"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b","Type":"ContainerDied","Data":"e72785daf5583e885437fbffb06256814d325302c070a880645c97270d88c27d"} Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.382351 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.504017 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-ca-trust-extracted\") pod \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.504087 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkqvj\" (UniqueName: \"kubernetes.io/projected/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-kube-api-access-tkqvj\") pod \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.504123 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-registry-tls\") pod \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.504352 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-registry-certificates\") pod \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.504450 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-trusted-ca\") pod \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.504621 5110 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-installation-pull-secrets\") pod \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.504687 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-bound-sa-token\") pod \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.504947 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\" (UID: \"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b\") " Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.506758 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.506846 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.518193 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.518346 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.519344 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-kube-api-access-tkqvj" (OuterVolumeSpecName: "kube-api-access-tkqvj") pod "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b"). InnerVolumeSpecName "kube-api-access-tkqvj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.520291 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.527167 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.532598 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b" (UID: "a1b4fb8f-576d-4a52-90c3-1b6db6a4170b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.607076 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkqvj\" (UniqueName: \"kubernetes.io/projected/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-kube-api-access-tkqvj\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.607127 5110 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.607140 5110 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.607150 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.607162 5110 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.607176 5110 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:55 crc kubenswrapper[5110]: I0126 00:16:55.607186 5110 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:56 crc kubenswrapper[5110]: I0126 00:16:56.134748 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-brrzw" event={"ID":"a1b4fb8f-576d-4a52-90c3-1b6db6a4170b","Type":"ContainerDied","Data":"a0a12521b814d07449d491762980d977e1fb1c84d0a0db9c58f5a8bb6c2bed82"} Jan 26 00:16:56 crc kubenswrapper[5110]: I0126 00:16:56.135408 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-brrzw" Jan 26 00:16:56 crc kubenswrapper[5110]: I0126 00:16:56.136266 5110 scope.go:117] "RemoveContainer" containerID="e72785daf5583e885437fbffb06256814d325302c070a880645c97270d88c27d" Jan 26 00:16:56 crc kubenswrapper[5110]: I0126 00:16:56.192906 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-brrzw"] Jan 26 00:16:56 crc kubenswrapper[5110]: I0126 00:16:56.197670 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-brrzw"] Jan 26 00:16:56 crc kubenswrapper[5110]: I0126 00:16:56.813642 5110 patch_prober.go:28] interesting pod/machine-config-daemon-c6tpr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:16:56 crc kubenswrapper[5110]: I0126 00:16:56.813784 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:16:57 crc kubenswrapper[5110]: I0126 00:16:57.325753 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1b4fb8f-576d-4a52-90c3-1b6db6a4170b" path="/var/lib/kubelet/pods/a1b4fb8f-576d-4a52-90c3-1b6db6a4170b/volumes" Jan 26 00:17:26 crc kubenswrapper[5110]: I0126 00:17:26.813373 5110 patch_prober.go:28] interesting pod/machine-config-daemon-c6tpr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:17:26 
crc kubenswrapper[5110]: I0126 00:17:26.814153 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:17:26 crc kubenswrapper[5110]: I0126 00:17:26.814233 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" Jan 26 00:17:26 crc kubenswrapper[5110]: I0126 00:17:26.815106 5110 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4060a0b2df1e9f152ebc91345ed0924c6529a54f677fc892640b96180c61050a"} pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:17:26 crc kubenswrapper[5110]: I0126 00:17:26.815183 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" containerID="cri-o://4060a0b2df1e9f152ebc91345ed0924c6529a54f677fc892640b96180c61050a" gracePeriod=600 Jan 26 00:17:27 crc kubenswrapper[5110]: I0126 00:17:27.352644 5110 generic.go:358] "Generic (PLEG): container finished" podID="f15bed73-d669-439f-9828-7b952d9bfe65" containerID="4060a0b2df1e9f152ebc91345ed0924c6529a54f677fc892640b96180c61050a" exitCode=0 Jan 26 00:17:27 crc kubenswrapper[5110]: I0126 00:17:27.352767 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" 
event={"ID":"f15bed73-d669-439f-9828-7b952d9bfe65","Type":"ContainerDied","Data":"4060a0b2df1e9f152ebc91345ed0924c6529a54f677fc892640b96180c61050a"}
Jan 26 00:17:27 crc kubenswrapper[5110]: I0126 00:17:27.353673 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" event={"ID":"f15bed73-d669-439f-9828-7b952d9bfe65","Type":"ContainerStarted","Data":"8cf1a32c65b02796064cd080f35e06d7241ce6749daefe8e41aaf499a12db038"}
Jan 26 00:17:27 crc kubenswrapper[5110]: I0126 00:17:27.353700 5110 scope.go:117] "RemoveContainer" containerID="8d9183336fcd7f82da30e7c47ac4c55b5870284a52f6c97a6046736bea666e8f"
Jan 26 00:18:00 crc kubenswrapper[5110]: I0126 00:18:00.146892 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489778-mg6x5"]
Jan 26 00:18:00 crc kubenswrapper[5110]: I0126 00:18:00.149505 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a1b4fb8f-576d-4a52-90c3-1b6db6a4170b" containerName="registry"
Jan 26 00:18:00 crc kubenswrapper[5110]: I0126 00:18:00.149538 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1b4fb8f-576d-4a52-90c3-1b6db6a4170b" containerName="registry"
Jan 26 00:18:00 crc kubenswrapper[5110]: I0126 00:18:00.152592 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="a1b4fb8f-576d-4a52-90c3-1b6db6a4170b" containerName="registry"
Jan 26 00:18:00 crc kubenswrapper[5110]: I0126 00:18:00.160709 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489778-mg6x5"
Jan 26 00:18:00 crc kubenswrapper[5110]: I0126 00:18:00.162158 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489778-mg6x5"]
Jan 26 00:18:00 crc kubenswrapper[5110]: I0126 00:18:00.162951 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dfffg\""
Jan 26 00:18:00 crc kubenswrapper[5110]: I0126 00:18:00.167845 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 26 00:18:00 crc kubenswrapper[5110]: I0126 00:18:00.168827 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 26 00:18:00 crc kubenswrapper[5110]: I0126 00:18:00.291063 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flxl7\" (UniqueName: \"kubernetes.io/projected/e90be102-be86-406a-a029-8fc2e04db1c6-kube-api-access-flxl7\") pod \"auto-csr-approver-29489778-mg6x5\" (UID: \"e90be102-be86-406a-a029-8fc2e04db1c6\") " pod="openshift-infra/auto-csr-approver-29489778-mg6x5"
Jan 26 00:18:00 crc kubenswrapper[5110]: I0126 00:18:00.392460 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-flxl7\" (UniqueName: \"kubernetes.io/projected/e90be102-be86-406a-a029-8fc2e04db1c6-kube-api-access-flxl7\") pod \"auto-csr-approver-29489778-mg6x5\" (UID: \"e90be102-be86-406a-a029-8fc2e04db1c6\") " pod="openshift-infra/auto-csr-approver-29489778-mg6x5"
Jan 26 00:18:00 crc kubenswrapper[5110]: I0126 00:18:00.427489 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-flxl7\" (UniqueName: \"kubernetes.io/projected/e90be102-be86-406a-a029-8fc2e04db1c6-kube-api-access-flxl7\") pod \"auto-csr-approver-29489778-mg6x5\" (UID: \"e90be102-be86-406a-a029-8fc2e04db1c6\") " pod="openshift-infra/auto-csr-approver-29489778-mg6x5"
Jan 26 00:18:00 crc kubenswrapper[5110]: I0126 00:18:00.487526 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489778-mg6x5"
Jan 26 00:18:01 crc kubenswrapper[5110]: I0126 00:18:01.018856 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489778-mg6x5"]
Jan 26 00:18:01 crc kubenswrapper[5110]: I0126 00:18:01.630824 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489778-mg6x5" event={"ID":"e90be102-be86-406a-a029-8fc2e04db1c6","Type":"ContainerStarted","Data":"83ab535982869ddf6a215b1431bb6819bb9632bb00944e8fbf525a23b3fbc0c8"}
Jan 26 00:18:04 crc kubenswrapper[5110]: I0126 00:18:04.468328 5110 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-ng9zn"
Jan 26 00:18:04 crc kubenswrapper[5110]: I0126 00:18:04.487612 5110 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-ng9zn"
Jan 26 00:18:04 crc kubenswrapper[5110]: I0126 00:18:04.653478 5110 generic.go:358] "Generic (PLEG): container finished" podID="e90be102-be86-406a-a029-8fc2e04db1c6" containerID="3bd4b42f9906e7eed79a8077d14be83722f741eec3dff145ee96fc3296a86085" exitCode=0
Jan 26 00:18:04 crc kubenswrapper[5110]: I0126 00:18:04.653526 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489778-mg6x5" event={"ID":"e90be102-be86-406a-a029-8fc2e04db1c6","Type":"ContainerDied","Data":"3bd4b42f9906e7eed79a8077d14be83722f741eec3dff145ee96fc3296a86085"}
Jan 26 00:18:05 crc kubenswrapper[5110]: I0126 00:18:05.489641 5110 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-25 00:13:04 +0000 UTC" deadline="2026-02-21 20:00:03.776437468 +0000 UTC"
Jan 26 00:18:05 crc kubenswrapper[5110]: I0126 00:18:05.489729 5110 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="643h41m58.286714657s"
Jan 26 00:18:05 crc kubenswrapper[5110]: I0126 00:18:05.879244 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489778-mg6x5"
Jan 26 00:18:05 crc kubenswrapper[5110]: I0126 00:18:05.975540 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flxl7\" (UniqueName: \"kubernetes.io/projected/e90be102-be86-406a-a029-8fc2e04db1c6-kube-api-access-flxl7\") pod \"e90be102-be86-406a-a029-8fc2e04db1c6\" (UID: \"e90be102-be86-406a-a029-8fc2e04db1c6\") "
Jan 26 00:18:05 crc kubenswrapper[5110]: I0126 00:18:05.984127 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e90be102-be86-406a-a029-8fc2e04db1c6-kube-api-access-flxl7" (OuterVolumeSpecName: "kube-api-access-flxl7") pod "e90be102-be86-406a-a029-8fc2e04db1c6" (UID: "e90be102-be86-406a-a029-8fc2e04db1c6"). InnerVolumeSpecName "kube-api-access-flxl7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:18:06 crc kubenswrapper[5110]: I0126 00:18:06.078058 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-flxl7\" (UniqueName: \"kubernetes.io/projected/e90be102-be86-406a-a029-8fc2e04db1c6-kube-api-access-flxl7\") on node \"crc\" DevicePath \"\""
Jan 26 00:18:06 crc kubenswrapper[5110]: I0126 00:18:06.569691 5110 scope.go:117] "RemoveContainer" containerID="a78fa1842b1c8090455602511267f4ae213fc67d96d93cf3c41e2dc83010e27d"
Jan 26 00:18:06 crc kubenswrapper[5110]: I0126 00:18:06.598300 5110 scope.go:117] "RemoveContainer" containerID="a1d2ffe9aa28d184a3128e2530810a13ca2ed60200a125c00979d325cc96f2c3"
Jan 26 00:18:06 crc kubenswrapper[5110]: I0126 00:18:06.669272 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489778-mg6x5"
Jan 26 00:18:06 crc kubenswrapper[5110]: I0126 00:18:06.669279 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489778-mg6x5" event={"ID":"e90be102-be86-406a-a029-8fc2e04db1c6","Type":"ContainerDied","Data":"83ab535982869ddf6a215b1431bb6819bb9632bb00944e8fbf525a23b3fbc0c8"}
Jan 26 00:18:06 crc kubenswrapper[5110]: I0126 00:18:06.669349 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83ab535982869ddf6a215b1431bb6819bb9632bb00944e8fbf525a23b3fbc0c8"
Jan 26 00:19:03 crc kubenswrapper[5110]: I0126 00:19:03.558897 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 26 00:19:03 crc kubenswrapper[5110]: I0126 00:19:03.560348 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 26 00:19:06 crc kubenswrapper[5110]: I0126 00:19:06.662454 5110 scope.go:117] "RemoveContainer" containerID="c66631d19234ba9bc7db616a1c64ff0a80d4daf5c65e2d4506c3d0d298c6265d"
Jan 26 00:19:06 crc kubenswrapper[5110]: I0126 00:19:06.692865 5110 scope.go:117] "RemoveContainer" containerID="d8e34cf5197b0f48ee7f9f1e932ec3fd5ee08868015057e5ae93e9639cda6134"
Jan 26 00:19:56 crc kubenswrapper[5110]: I0126 00:19:56.813395 5110 patch_prober.go:28] interesting pod/machine-config-daemon-c6tpr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 00:19:56 crc kubenswrapper[5110]: I0126 00:19:56.814523 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 00:20:00 crc kubenswrapper[5110]: I0126 00:20:00.141977 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489780-n2kl5"]
Jan 26 00:20:00 crc kubenswrapper[5110]: I0126 00:20:00.142499 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e90be102-be86-406a-a029-8fc2e04db1c6" containerName="oc"
Jan 26 00:20:00 crc kubenswrapper[5110]: I0126 00:20:00.142512 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="e90be102-be86-406a-a029-8fc2e04db1c6" containerName="oc"
Jan 26 00:20:00 crc kubenswrapper[5110]: I0126 00:20:00.142611 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="e90be102-be86-406a-a029-8fc2e04db1c6" containerName="oc"
Jan 26 00:20:00 crc kubenswrapper[5110]: I0126 00:20:00.150896 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489780-n2kl5"
Jan 26 00:20:00 crc kubenswrapper[5110]: I0126 00:20:00.155328 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 26 00:20:00 crc kubenswrapper[5110]: I0126 00:20:00.157563 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 26 00:20:00 crc kubenswrapper[5110]: I0126 00:20:00.158858 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dfffg\""
Jan 26 00:20:00 crc kubenswrapper[5110]: I0126 00:20:00.163497 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489780-n2kl5"]
Jan 26 00:20:00 crc kubenswrapper[5110]: I0126 00:20:00.282604 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxnlx\" (UniqueName: \"kubernetes.io/projected/616e145b-2b1b-40bf-94a6-a54da571e102-kube-api-access-fxnlx\") pod \"auto-csr-approver-29489780-n2kl5\" (UID: \"616e145b-2b1b-40bf-94a6-a54da571e102\") " pod="openshift-infra/auto-csr-approver-29489780-n2kl5"
Jan 26 00:20:00 crc kubenswrapper[5110]: I0126 00:20:00.384769 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fxnlx\" (UniqueName: \"kubernetes.io/projected/616e145b-2b1b-40bf-94a6-a54da571e102-kube-api-access-fxnlx\") pod \"auto-csr-approver-29489780-n2kl5\" (UID: \"616e145b-2b1b-40bf-94a6-a54da571e102\") " pod="openshift-infra/auto-csr-approver-29489780-n2kl5"
Jan 26 00:20:00 crc kubenswrapper[5110]: I0126 00:20:00.411619 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxnlx\" (UniqueName: \"kubernetes.io/projected/616e145b-2b1b-40bf-94a6-a54da571e102-kube-api-access-fxnlx\") pod \"auto-csr-approver-29489780-n2kl5\" (UID: \"616e145b-2b1b-40bf-94a6-a54da571e102\") " pod="openshift-infra/auto-csr-approver-29489780-n2kl5"
Jan 26 00:20:00 crc kubenswrapper[5110]: I0126 00:20:00.475854 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489780-n2kl5"
Jan 26 00:20:00 crc kubenswrapper[5110]: I0126 00:20:00.767016 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489780-n2kl5"]
Jan 26 00:20:00 crc kubenswrapper[5110]: I0126 00:20:00.777747 5110 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 00:20:01 crc kubenswrapper[5110]: I0126 00:20:01.530120 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489780-n2kl5" event={"ID":"616e145b-2b1b-40bf-94a6-a54da571e102","Type":"ContainerStarted","Data":"1df88f8c961574fa63c59c89c063b3f6f2128ba6648c6b316371458bc49e34bc"}
Jan 26 00:20:02 crc kubenswrapper[5110]: I0126 00:20:02.538244 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489780-n2kl5" event={"ID":"616e145b-2b1b-40bf-94a6-a54da571e102","Type":"ContainerStarted","Data":"fcd41525cdd600223dfb1d80d7f6df5a8805e634339d46bccde54adf573ee530"}
Jan 26 00:20:02 crc kubenswrapper[5110]: I0126 00:20:02.566862 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29489780-n2kl5" podStartSLOduration=1.190624073 podStartE2EDuration="2.566828736s" podCreationTimestamp="2026-01-26 00:20:00 +0000 UTC" firstStartedPulling="2026-01-26 00:20:00.778021199 +0000 UTC m=+658.006919818" lastFinishedPulling="2026-01-26 00:20:02.154225812 +0000 UTC m=+659.383124481" observedRunningTime="2026-01-26 00:20:02.556502562 +0000 UTC m=+659.785401181" watchObservedRunningTime="2026-01-26 00:20:02.566828736 +0000 UTC m=+659.795727355"
Jan 26 00:20:03 crc kubenswrapper[5110]: I0126 00:20:03.549635 5110 generic.go:358] "Generic (PLEG): container finished" podID="616e145b-2b1b-40bf-94a6-a54da571e102" containerID="fcd41525cdd600223dfb1d80d7f6df5a8805e634339d46bccde54adf573ee530" exitCode=0
Jan 26 00:20:03 crc kubenswrapper[5110]: I0126 00:20:03.549915 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489780-n2kl5" event={"ID":"616e145b-2b1b-40bf-94a6-a54da571e102","Type":"ContainerDied","Data":"fcd41525cdd600223dfb1d80d7f6df5a8805e634339d46bccde54adf573ee530"}
Jan 26 00:20:04 crc kubenswrapper[5110]: I0126 00:20:04.771442 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489780-n2kl5"
Jan 26 00:20:04 crc kubenswrapper[5110]: I0126 00:20:04.858386 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxnlx\" (UniqueName: \"kubernetes.io/projected/616e145b-2b1b-40bf-94a6-a54da571e102-kube-api-access-fxnlx\") pod \"616e145b-2b1b-40bf-94a6-a54da571e102\" (UID: \"616e145b-2b1b-40bf-94a6-a54da571e102\") "
Jan 26 00:20:04 crc kubenswrapper[5110]: I0126 00:20:04.865409 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/616e145b-2b1b-40bf-94a6-a54da571e102-kube-api-access-fxnlx" (OuterVolumeSpecName: "kube-api-access-fxnlx") pod "616e145b-2b1b-40bf-94a6-a54da571e102" (UID: "616e145b-2b1b-40bf-94a6-a54da571e102"). InnerVolumeSpecName "kube-api-access-fxnlx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:20:04 crc kubenswrapper[5110]: I0126 00:20:04.959937 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fxnlx\" (UniqueName: \"kubernetes.io/projected/616e145b-2b1b-40bf-94a6-a54da571e102-kube-api-access-fxnlx\") on node \"crc\" DevicePath \"\""
Jan 26 00:20:05 crc kubenswrapper[5110]: I0126 00:20:05.566441 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489780-n2kl5" event={"ID":"616e145b-2b1b-40bf-94a6-a54da571e102","Type":"ContainerDied","Data":"1df88f8c961574fa63c59c89c063b3f6f2128ba6648c6b316371458bc49e34bc"}
Jan 26 00:20:05 crc kubenswrapper[5110]: I0126 00:20:05.566514 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1df88f8c961574fa63c59c89c063b3f6f2128ba6648c6b316371458bc49e34bc"
Jan 26 00:20:05 crc kubenswrapper[5110]: I0126 00:20:05.566534 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489780-n2kl5"
Jan 26 00:20:26 crc kubenswrapper[5110]: I0126 00:20:26.812931 5110 patch_prober.go:28] interesting pod/machine-config-daemon-c6tpr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 00:20:26 crc kubenswrapper[5110]: I0126 00:20:26.813514 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.642115 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt"]
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.643319 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" podUID="f11dce8d-124f-497f-96a2-11dd1dddd26d" containerName="kube-rbac-proxy" containerID="cri-o://4194f5c9379f05c335d58c418cfd83b43644db004cb9b4c24d7d124312a94844" gracePeriod=30
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.643770 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" podUID="f11dce8d-124f-497f-96a2-11dd1dddd26d" containerName="ovnkube-cluster-manager" containerID="cri-o://ebe5a5f53b62a812a3030d77b983eb270e4cd2147fb0a3329f9013f86244cffe" gracePeriod=30
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.840241 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.849985 5110 generic.go:358] "Generic (PLEG): container finished" podID="f11dce8d-124f-497f-96a2-11dd1dddd26d" containerID="ebe5a5f53b62a812a3030d77b983eb270e4cd2147fb0a3329f9013f86244cffe" exitCode=0
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.850011 5110 generic.go:358] "Generic (PLEG): container finished" podID="f11dce8d-124f-497f-96a2-11dd1dddd26d" containerID="4194f5c9379f05c335d58c418cfd83b43644db004cb9b4c24d7d124312a94844" exitCode=0
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.850098 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" event={"ID":"f11dce8d-124f-497f-96a2-11dd1dddd26d","Type":"ContainerDied","Data":"ebe5a5f53b62a812a3030d77b983eb270e4cd2147fb0a3329f9013f86244cffe"}
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.850132 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" event={"ID":"f11dce8d-124f-497f-96a2-11dd1dddd26d","Type":"ContainerDied","Data":"4194f5c9379f05c335d58c418cfd83b43644db004cb9b4c24d7d124312a94844"}
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.850146 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt" event={"ID":"f11dce8d-124f-497f-96a2-11dd1dddd26d","Type":"ContainerDied","Data":"b828bb804ab6cf9a77bb51fb75c6134d1b6590e4afa009431ee2df876fd9e62c"}
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.850165 5110 scope.go:117] "RemoveContainer" containerID="ebe5a5f53b62a812a3030d77b983eb270e4cd2147fb0a3329f9013f86244cffe"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.850329 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.870144 5110 scope.go:117] "RemoveContainer" containerID="4194f5c9379f05c335d58c418cfd83b43644db004cb9b4c24d7d124312a94844"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.881985 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-td7cm"]
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.882959 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="616e145b-2b1b-40bf-94a6-a54da571e102" containerName="oc"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.882987 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="616e145b-2b1b-40bf-94a6-a54da571e102" containerName="oc"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.883009 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f11dce8d-124f-497f-96a2-11dd1dddd26d" containerName="ovnkube-cluster-manager"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.883018 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="f11dce8d-124f-497f-96a2-11dd1dddd26d" containerName="ovnkube-cluster-manager"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.883039 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f11dce8d-124f-497f-96a2-11dd1dddd26d" containerName="kube-rbac-proxy"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.883048 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="f11dce8d-124f-497f-96a2-11dd1dddd26d" containerName="kube-rbac-proxy"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.883169 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="f11dce8d-124f-497f-96a2-11dd1dddd26d" containerName="kube-rbac-proxy"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.883185 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="f11dce8d-124f-497f-96a2-11dd1dddd26d" containerName="ovnkube-cluster-manager"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.883200 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="616e145b-2b1b-40bf-94a6-a54da571e102" containerName="oc"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.887345 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bnkth"]
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.887925 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="ovn-controller" containerID="cri-o://fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b" gracePeriod=30
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.887971 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="nbdb" containerID="cri-o://181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26" gracePeriod=30
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.888035 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="kube-rbac-proxy-node" containerID="cri-o://c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218" gracePeriod=30
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.888028 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0" gracePeriod=30
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.888056 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="ovn-acl-logging" containerID="cri-o://09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e" gracePeriod=30
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.888298 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-td7cm"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.888345 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="northd" containerID="cri-o://acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9" gracePeriod=30
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.888342 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="sbdb" containerID="cri-o://828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8" gracePeriod=30
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.911942 5110 scope.go:117] "RemoveContainer" containerID="ebe5a5f53b62a812a3030d77b983eb270e4cd2147fb0a3329f9013f86244cffe"
Jan 26 00:20:47 crc kubenswrapper[5110]: E0126 00:20:47.915874 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebe5a5f53b62a812a3030d77b983eb270e4cd2147fb0a3329f9013f86244cffe\": container with ID starting with ebe5a5f53b62a812a3030d77b983eb270e4cd2147fb0a3329f9013f86244cffe not found: ID does not exist" containerID="ebe5a5f53b62a812a3030d77b983eb270e4cd2147fb0a3329f9013f86244cffe"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.915920 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe5a5f53b62a812a3030d77b983eb270e4cd2147fb0a3329f9013f86244cffe"} err="failed to get container status \"ebe5a5f53b62a812a3030d77b983eb270e4cd2147fb0a3329f9013f86244cffe\": rpc error: code = NotFound desc = could not find container \"ebe5a5f53b62a812a3030d77b983eb270e4cd2147fb0a3329f9013f86244cffe\": container with ID starting with ebe5a5f53b62a812a3030d77b983eb270e4cd2147fb0a3329f9013f86244cffe not found: ID does not exist"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.915950 5110 scope.go:117] "RemoveContainer" containerID="4194f5c9379f05c335d58c418cfd83b43644db004cb9b4c24d7d124312a94844"
Jan 26 00:20:47 crc kubenswrapper[5110]: E0126 00:20:47.918390 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4194f5c9379f05c335d58c418cfd83b43644db004cb9b4c24d7d124312a94844\": container with ID starting with 4194f5c9379f05c335d58c418cfd83b43644db004cb9b4c24d7d124312a94844 not found: ID does not exist" containerID="4194f5c9379f05c335d58c418cfd83b43644db004cb9b4c24d7d124312a94844"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.918423 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4194f5c9379f05c335d58c418cfd83b43644db004cb9b4c24d7d124312a94844"} err="failed to get container status \"4194f5c9379f05c335d58c418cfd83b43644db004cb9b4c24d7d124312a94844\": rpc error: code = NotFound desc = could not find container \"4194f5c9379f05c335d58c418cfd83b43644db004cb9b4c24d7d124312a94844\": container with ID starting with 4194f5c9379f05c335d58c418cfd83b43644db004cb9b4c24d7d124312a94844 not found: ID does not exist"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.918441 5110 scope.go:117] "RemoveContainer" containerID="ebe5a5f53b62a812a3030d77b983eb270e4cd2147fb0a3329f9013f86244cffe"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.918653 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="ovnkube-controller" containerID="cri-o://ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab" gracePeriod=30
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.921813 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe5a5f53b62a812a3030d77b983eb270e4cd2147fb0a3329f9013f86244cffe"} err="failed to get container status \"ebe5a5f53b62a812a3030d77b983eb270e4cd2147fb0a3329f9013f86244cffe\": rpc error: code = NotFound desc = could not find container \"ebe5a5f53b62a812a3030d77b983eb270e4cd2147fb0a3329f9013f86244cffe\": container with ID starting with ebe5a5f53b62a812a3030d77b983eb270e4cd2147fb0a3329f9013f86244cffe not found: ID does not exist"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.921844 5110 scope.go:117] "RemoveContainer" containerID="4194f5c9379f05c335d58c418cfd83b43644db004cb9b4c24d7d124312a94844"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.922284 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4194f5c9379f05c335d58c418cfd83b43644db004cb9b4c24d7d124312a94844"} err="failed to get container status \"4194f5c9379f05c335d58c418cfd83b43644db004cb9b4c24d7d124312a94844\": rpc error: code = NotFound desc = could not find container \"4194f5c9379f05c335d58c418cfd83b43644db004cb9b4c24d7d124312a94844\": container with ID starting with 4194f5c9379f05c335d58c418cfd83b43644db004cb9b4c24d7d124312a94844 not found: ID does not exist"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.990370 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f11dce8d-124f-497f-96a2-11dd1dddd26d-ovn-control-plane-metrics-cert\") pod \"f11dce8d-124f-497f-96a2-11dd1dddd26d\" (UID: \"f11dce8d-124f-497f-96a2-11dd1dddd26d\") "
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.990654 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f11dce8d-124f-497f-96a2-11dd1dddd26d-ovnkube-config\") pod \"f11dce8d-124f-497f-96a2-11dd1dddd26d\" (UID: \"f11dce8d-124f-497f-96a2-11dd1dddd26d\") "
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.990852 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f11dce8d-124f-497f-96a2-11dd1dddd26d-env-overrides\") pod \"f11dce8d-124f-497f-96a2-11dd1dddd26d\" (UID: \"f11dce8d-124f-497f-96a2-11dd1dddd26d\") "
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.990974 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mlqb\" (UniqueName: \"kubernetes.io/projected/f11dce8d-124f-497f-96a2-11dd1dddd26d-kube-api-access-4mlqb\") pod \"f11dce8d-124f-497f-96a2-11dd1dddd26d\" (UID: \"f11dce8d-124f-497f-96a2-11dd1dddd26d\") "
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.991167 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fbb883e9-cfc0-4055-8dcc-ce7d2550b41b-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-td7cm\" (UID: \"fbb883e9-cfc0-4055-8dcc-ce7d2550b41b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-td7cm"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.991278 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tsn8\" (UniqueName: \"kubernetes.io/projected/fbb883e9-cfc0-4055-8dcc-ce7d2550b41b-kube-api-access-4tsn8\") pod \"ovnkube-control-plane-97c9b6c48-td7cm\" (UID: \"fbb883e9-cfc0-4055-8dcc-ce7d2550b41b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-td7cm"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.991355 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fbb883e9-cfc0-4055-8dcc-ce7d2550b41b-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-td7cm\" (UID: \"fbb883e9-cfc0-4055-8dcc-ce7d2550b41b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-td7cm"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.991423 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fbb883e9-cfc0-4055-8dcc-ce7d2550b41b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-td7cm\" (UID: \"fbb883e9-cfc0-4055-8dcc-ce7d2550b41b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-td7cm"
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.992047 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f11dce8d-124f-497f-96a2-11dd1dddd26d-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "f11dce8d-124f-497f-96a2-11dd1dddd26d" (UID: "f11dce8d-124f-497f-96a2-11dd1dddd26d"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.992367 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f11dce8d-124f-497f-96a2-11dd1dddd26d-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "f11dce8d-124f-497f-96a2-11dd1dddd26d" (UID: "f11dce8d-124f-497f-96a2-11dd1dddd26d"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:20:47 crc kubenswrapper[5110]: I0126 00:20:47.998697 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f11dce8d-124f-497f-96a2-11dd1dddd26d-kube-api-access-4mlqb" (OuterVolumeSpecName: "kube-api-access-4mlqb") pod "f11dce8d-124f-497f-96a2-11dd1dddd26d" (UID: "f11dce8d-124f-497f-96a2-11dd1dddd26d"). InnerVolumeSpecName "kube-api-access-4mlqb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:47.999977 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f11dce8d-124f-497f-96a2-11dd1dddd26d-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "f11dce8d-124f-497f-96a2-11dd1dddd26d" (UID: "f11dce8d-124f-497f-96a2-11dd1dddd26d"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.093820 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fbb883e9-cfc0-4055-8dcc-ce7d2550b41b-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-td7cm\" (UID: \"fbb883e9-cfc0-4055-8dcc-ce7d2550b41b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-td7cm"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.094160 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4tsn8\" (UniqueName: \"kubernetes.io/projected/fbb883e9-cfc0-4055-8dcc-ce7d2550b41b-kube-api-access-4tsn8\") pod \"ovnkube-control-plane-97c9b6c48-td7cm\" (UID: \"fbb883e9-cfc0-4055-8dcc-ce7d2550b41b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-td7cm"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.094253 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fbb883e9-cfc0-4055-8dcc-ce7d2550b41b-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-td7cm\" (UID: \"fbb883e9-cfc0-4055-8dcc-ce7d2550b41b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-td7cm"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.094291 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fbb883e9-cfc0-4055-8dcc-ce7d2550b41b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-td7cm\" (UID: \"fbb883e9-cfc0-4055-8dcc-ce7d2550b41b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-td7cm"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.094404 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4mlqb\" (UniqueName: \"kubernetes.io/projected/f11dce8d-124f-497f-96a2-11dd1dddd26d-kube-api-access-4mlqb\") on node \"crc\" DevicePath \"\""
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.094423 5110 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f11dce8d-124f-497f-96a2-11dd1dddd26d-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.094434 5110 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f11dce8d-124f-497f-96a2-11dd1dddd26d-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.094442 5110 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f11dce8d-124f-497f-96a2-11dd1dddd26d-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.094730 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fbb883e9-cfc0-4055-8dcc-ce7d2550b41b-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-td7cm\" (UID: \"fbb883e9-cfc0-4055-8dcc-ce7d2550b41b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-td7cm"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.094789 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fbb883e9-cfc0-4055-8dcc-ce7d2550b41b-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-td7cm\" (UID: \"fbb883e9-cfc0-4055-8dcc-ce7d2550b41b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-td7cm"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.100563 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fbb883e9-cfc0-4055-8dcc-ce7d2550b41b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-td7cm\" (UID: \"fbb883e9-cfc0-4055-8dcc-ce7d2550b41b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-td7cm"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.112263 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tsn8\" (UniqueName: \"kubernetes.io/projected/fbb883e9-cfc0-4055-8dcc-ce7d2550b41b-kube-api-access-4tsn8\") pod \"ovnkube-control-plane-97c9b6c48-td7cm\" (UID: \"fbb883e9-cfc0-4055-8dcc-ce7d2550b41b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-td7cm"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.190357 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnkth_c2cba3eb-9a27-49a0-a3e6-645a8853c027/ovn-acl-logging/0.log"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.190842 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnkth_c2cba3eb-9a27-49a0-a3e6-645a8853c027/ovn-controller/0.log"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.191361 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.218534 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt"] Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.218580 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-td7cm" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.228058 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-qgzzt"] Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.260869 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xdrtz"] Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.261698 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="nbdb" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.261729 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="nbdb" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.261762 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="sbdb" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.261771 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="sbdb" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.261788 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.261816 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" 
containerName="kube-rbac-proxy-ovn-metrics" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.261827 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="northd" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.261836 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="northd" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.261847 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="kube-rbac-proxy-node" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.261855 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="kube-rbac-proxy-node" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.261865 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="ovnkube-controller" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.261873 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="ovnkube-controller" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.261892 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="ovn-acl-logging" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.261899 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="ovn-acl-logging" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.261912 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="ovn-controller" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.261919 5110 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="ovn-controller" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.261931 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="kubecfg-setup" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.261939 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="kubecfg-setup" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.262045 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="northd" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.262064 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="kube-rbac-proxy-node" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.262076 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="ovnkube-controller" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.262091 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="ovn-controller" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.262110 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="nbdb" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.262128 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="ovn-acl-logging" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.262147 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="sbdb" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.262158 5110 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.269593 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.296874 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kp4q\" (UniqueName: \"kubernetes.io/projected/c2cba3eb-9a27-49a0-a3e6-645a8853c027-kube-api-access-6kp4q\") pod \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.296973 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-run-systemd\") pod \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.297003 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.297031 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c2cba3eb-9a27-49a0-a3e6-645a8853c027-ovnkube-config\") pod \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.297071 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/c2cba3eb-9a27-49a0-a3e6-645a8853c027-ovn-node-metrics-cert\") pod \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.297104 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-run-ovn\") pod \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.297125 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-run-netns\") pod \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.297189 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-var-lib-openvswitch\") pod \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.297269 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-cni-netd\") pod \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.297303 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-run-openvswitch\") pod \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 
00:20:48.297325 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-slash\") pod \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.297350 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-cni-bin\") pod \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.297381 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-run-ovn-kubernetes\") pod \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.297454 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-systemd-units\") pod \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.297593 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-log-socket\") pod \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.297648 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c2cba3eb-9a27-49a0-a3e6-645a8853c027-env-overrides\") pod 
\"c2cba3eb-9a27-49a0-a3e6-645a8853c027\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.297696 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c2cba3eb-9a27-49a0-a3e6-645a8853c027-ovnkube-script-lib\") pod \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.297670 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c2cba3eb-9a27-49a0-a3e6-645a8853c027" (UID: "c2cba3eb-9a27-49a0-a3e6-645a8853c027"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.297728 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-kubelet\") pod \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.297993 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-etc-openvswitch\") pod \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.298051 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-node-log\") pod \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\" (UID: \"c2cba3eb-9a27-49a0-a3e6-645a8853c027\") " Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 
00:20:48.298095 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c2cba3eb-9a27-49a0-a3e6-645a8853c027" (UID: "c2cba3eb-9a27-49a0-a3e6-645a8853c027"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.298154 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c2cba3eb-9a27-49a0-a3e6-645a8853c027" (UID: "c2cba3eb-9a27-49a0-a3e6-645a8853c027"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.298198 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c2cba3eb-9a27-49a0-a3e6-645a8853c027" (UID: "c2cba3eb-9a27-49a0-a3e6-645a8853c027"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.298203 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c2cba3eb-9a27-49a0-a3e6-645a8853c027" (UID: "c2cba3eb-9a27-49a0-a3e6-645a8853c027"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.298225 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-log-socket" (OuterVolumeSpecName: "log-socket") pod "c2cba3eb-9a27-49a0-a3e6-645a8853c027" (UID: "c2cba3eb-9a27-49a0-a3e6-645a8853c027"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.298220 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c2cba3eb-9a27-49a0-a3e6-645a8853c027" (UID: "c2cba3eb-9a27-49a0-a3e6-645a8853c027"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.298245 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c2cba3eb-9a27-49a0-a3e6-645a8853c027" (UID: "c2cba3eb-9a27-49a0-a3e6-645a8853c027"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.298282 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c2cba3eb-9a27-49a0-a3e6-645a8853c027" (UID: "c2cba3eb-9a27-49a0-a3e6-645a8853c027"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.298244 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c2cba3eb-9a27-49a0-a3e6-645a8853c027" (UID: "c2cba3eb-9a27-49a0-a3e6-645a8853c027"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.298327 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-slash" (OuterVolumeSpecName: "host-slash") pod "c2cba3eb-9a27-49a0-a3e6-645a8853c027" (UID: "c2cba3eb-9a27-49a0-a3e6-645a8853c027"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.298347 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c2cba3eb-9a27-49a0-a3e6-645a8853c027" (UID: "c2cba3eb-9a27-49a0-a3e6-645a8853c027"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.298377 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2cba3eb-9a27-49a0-a3e6-645a8853c027-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c2cba3eb-9a27-49a0-a3e6-645a8853c027" (UID: "c2cba3eb-9a27-49a0-a3e6-645a8853c027"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.298371 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c2cba3eb-9a27-49a0-a3e6-645a8853c027" (UID: "c2cba3eb-9a27-49a0-a3e6-645a8853c027"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.298452 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-node-log" (OuterVolumeSpecName: "node-log") pod "c2cba3eb-9a27-49a0-a3e6-645a8853c027" (UID: "c2cba3eb-9a27-49a0-a3e6-645a8853c027"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.298812 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2cba3eb-9a27-49a0-a3e6-645a8853c027-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c2cba3eb-9a27-49a0-a3e6-645a8853c027" (UID: "c2cba3eb-9a27-49a0-a3e6-645a8853c027"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.298831 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2cba3eb-9a27-49a0-a3e6-645a8853c027-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c2cba3eb-9a27-49a0-a3e6-645a8853c027" (UID: "c2cba3eb-9a27-49a0-a3e6-645a8853c027"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.299346 5110 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.299412 5110 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c2cba3eb-9a27-49a0-a3e6-645a8853c027-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.299432 5110 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.299443 5110 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.299457 5110 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.299471 5110 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.299486 5110 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 00:20:48 crc 
kubenswrapper[5110]: I0126 00:20:48.299497 5110 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-slash\") on node \"crc\" DevicePath \"\"" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.299508 5110 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.299595 5110 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.299610 5110 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.299622 5110 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-log-socket\") on node \"crc\" DevicePath \"\"" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.299633 5110 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c2cba3eb-9a27-49a0-a3e6-645a8853c027-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.299641 5110 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c2cba3eb-9a27-49a0-a3e6-645a8853c027-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.299652 5110 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.299664 5110 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.299676 5110 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-node-log\") on node \"crc\" DevicePath \"\""
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.303143 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2cba3eb-9a27-49a0-a3e6-645a8853c027-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c2cba3eb-9a27-49a0-a3e6-645a8853c027" (UID: "c2cba3eb-9a27-49a0-a3e6-645a8853c027"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.303233 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2cba3eb-9a27-49a0-a3e6-645a8853c027-kube-api-access-6kp4q" (OuterVolumeSpecName: "kube-api-access-6kp4q") pod "c2cba3eb-9a27-49a0-a3e6-645a8853c027" (UID: "c2cba3eb-9a27-49a0-a3e6-645a8853c027"). InnerVolumeSpecName "kube-api-access-6kp4q". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.321492 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c2cba3eb-9a27-49a0-a3e6-645a8853c027" (UID: "c2cba3eb-9a27-49a0-a3e6-645a8853c027"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.401507 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-host-kubelet\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.401558 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-var-lib-openvswitch\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.401591 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-host-cni-bin\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.401643 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-host-cni-netd\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.401666 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-ovnkube-config\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.401698 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-etc-openvswitch\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.401718 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-run-ovn\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.401745 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-run-openvswitch\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.401807 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-systemd-units\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.401836 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-host-slash\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.401869 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-ovn-node-metrics-cert\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.401895 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-node-log\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.401918 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-log-socket\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.401971 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-host-run-ovn-kubernetes\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.401994 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-host-run-netns\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.402041 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfgmj\" (UniqueName: \"kubernetes.io/projected/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-kube-api-access-tfgmj\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.402064 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-run-systemd\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.402086 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-ovnkube-script-lib\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.402111 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-env-overrides\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.402135 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.402177 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6kp4q\" (UniqueName: \"kubernetes.io/projected/c2cba3eb-9a27-49a0-a3e6-645a8853c027-kube-api-access-6kp4q\") on node \"crc\" DevicePath \"\""
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.402191 5110 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c2cba3eb-9a27-49a0-a3e6-645a8853c027-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.402203 5110 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c2cba3eb-9a27-49a0-a3e6-645a8853c027-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.502921 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-run-openvswitch\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.503132 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-run-openvswitch\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.503176 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-systemd-units\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.503279 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-systemd-units\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.503304 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-host-slash\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.503334 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-host-slash\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.503387 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-ovn-node-metrics-cert\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.503432 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-node-log\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.503475 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-log-socket\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.503571 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-host-run-ovn-kubernetes\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.503610 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-node-log\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.503619 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-host-run-netns\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.503675 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-host-run-ovn-kubernetes\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.503680 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-log-socket\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.503647 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-host-run-netns\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.503714 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tfgmj\" (UniqueName: \"kubernetes.io/projected/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-kube-api-access-tfgmj\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.503771 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-run-systemd\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.503839 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-run-systemd\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.503883 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-ovnkube-script-lib\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.503915 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-env-overrides\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.503959 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.503994 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-host-kubelet\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.504015 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-var-lib-openvswitch\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.504052 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-host-cni-bin\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.504125 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-host-cni-netd\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.504141 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-host-kubelet\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.504149 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-ovnkube-config\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.504185 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-var-lib-openvswitch\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.504239 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.504242 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-etc-openvswitch\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.504269 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-etc-openvswitch\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.504272 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-host-cni-bin\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.504299 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-run-ovn\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.504308 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-host-cni-netd\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.504329 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-run-ovn\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.505419 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-ovnkube-config\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.505426 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-ovnkube-script-lib\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.505719 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-env-overrides\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.508400 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-ovn-node-metrics-cert\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.523049 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfgmj\" (UniqueName: \"kubernetes.io/projected/618a4b44-a8e1-4fec-bb7c-83ed2e1bf949-kube-api-access-tfgmj\") pod \"ovnkube-node-xdrtz\" (UID: \"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949\") " pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.593115 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz"
Jan 26 00:20:48 crc kubenswrapper[5110]: W0126 00:20:48.612284 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod618a4b44_a8e1_4fec_bb7c_83ed2e1bf949.slice/crio-35d5587d7e6577921f6ff37ccb03972f1c50d3c9d2c8a8234d5ba90c7c8a105a WatchSource:0}: Error finding container 35d5587d7e6577921f6ff37ccb03972f1c50d3c9d2c8a8234d5ba90c7c8a105a: Status 404 returned error can't find the container with id 35d5587d7e6577921f6ff37ccb03972f1c50d3c9d2c8a8234d5ba90c7c8a105a
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.867129 5110 generic.go:358] "Generic (PLEG): container finished" podID="618a4b44-a8e1-4fec-bb7c-83ed2e1bf949" containerID="9ae1896a49ae453c5b83e6a2129e55666e229af18233f1939c5ed9120c193fc1" exitCode=0
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.867285 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz" event={"ID":"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949","Type":"ContainerDied","Data":"9ae1896a49ae453c5b83e6a2129e55666e229af18233f1939c5ed9120c193fc1"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.867360 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz" event={"ID":"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949","Type":"ContainerStarted","Data":"35d5587d7e6577921f6ff37ccb03972f1c50d3c9d2c8a8234d5ba90c7c8a105a"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.877588 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnkth_c2cba3eb-9a27-49a0-a3e6-645a8853c027/ovn-acl-logging/0.log"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.878298 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bnkth_c2cba3eb-9a27-49a0-a3e6-645a8853c027/ovn-controller/0.log"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.878707 5110 generic.go:358] "Generic (PLEG): container finished" podID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerID="ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab" exitCode=0
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.878736 5110 generic.go:358] "Generic (PLEG): container finished" podID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerID="828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8" exitCode=0
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.878745 5110 generic.go:358] "Generic (PLEG): container finished" podID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerID="181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26" exitCode=0
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.878754 5110 generic.go:358] "Generic (PLEG): container finished" podID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerID="acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9" exitCode=0
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.878765 5110 generic.go:358] "Generic (PLEG): container finished" podID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerID="f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0" exitCode=0
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.878774 5110 generic.go:358] "Generic (PLEG): container finished" podID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerID="c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218" exitCode=0
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.878749 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" event={"ID":"c2cba3eb-9a27-49a0-a3e6-645a8853c027","Type":"ContainerDied","Data":"ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.878851 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" event={"ID":"c2cba3eb-9a27-49a0-a3e6-645a8853c027","Type":"ContainerDied","Data":"828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.878781 5110 generic.go:358] "Generic (PLEG): container finished" podID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerID="09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e" exitCode=143
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.878894 5110 scope.go:117] "RemoveContainer" containerID="ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab"
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.878907 5110 generic.go:358] "Generic (PLEG): container finished" podID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" containerID="fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b" exitCode=143
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.878880 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" event={"ID":"c2cba3eb-9a27-49a0-a3e6-645a8853c027","Type":"ContainerDied","Data":"181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.878962 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" event={"ID":"c2cba3eb-9a27-49a0-a3e6-645a8853c027","Type":"ContainerDied","Data":"acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.878987 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" event={"ID":"c2cba3eb-9a27-49a0-a3e6-645a8853c027","Type":"ContainerDied","Data":"f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879003 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" event={"ID":"c2cba3eb-9a27-49a0-a3e6-645a8853c027","Type":"ContainerDied","Data":"c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879017 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879028 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879037 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879047 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" event={"ID":"c2cba3eb-9a27-49a0-a3e6-645a8853c027","Type":"ContainerDied","Data":"09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879061 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879069 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879075 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879081 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879087 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879094 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879100 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879107 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879114 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879123 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" event={"ID":"c2cba3eb-9a27-49a0-a3e6-645a8853c027","Type":"ContainerDied","Data":"fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879132 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879140 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879148 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879153 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879159 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879165 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879170 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879183 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879188 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879196 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" event={"ID":"c2cba3eb-9a27-49a0-a3e6-645a8853c027","Type":"ContainerDied","Data":"0846a745825adace2d108546cbd36763771fe83521c75441230842847d8e72c2"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879206 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879211 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879215 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26"}
Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879220 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9"}
Jan 26
00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879225 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0"} Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879230 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218"} Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879235 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e"} Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879240 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b"} Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.879244 5110 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73"} Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.881737 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-td7cm" event={"ID":"fbb883e9-cfc0-4055-8dcc-ce7d2550b41b","Type":"ContainerStarted","Data":"054ead4762c5bd70dcc059c12efae68f259706aab30b4d811d75f7085c49d7ea"} Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.881774 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-td7cm" event={"ID":"fbb883e9-cfc0-4055-8dcc-ce7d2550b41b","Type":"ContainerStarted","Data":"219f6e33d9f43928cff60ef52911b2de407f007c3b18ca80a5c07d48ba0736e2"} Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.881789 5110 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-td7cm" event={"ID":"fbb883e9-cfc0-4055-8dcc-ce7d2550b41b","Type":"ContainerStarted","Data":"ad28e95d0b458a486ce6662a250c2491fbe90c77c1fec2124d9cc5b78fd9487a"} Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.882527 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bnkth" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.884975 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jh4hk_f2948d2b-fac7-4f3f-8b5f-f6f9c914daec/kube-multus/0.log" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.885014 5110 generic.go:358] "Generic (PLEG): container finished" podID="f2948d2b-fac7-4f3f-8b5f-f6f9c914daec" containerID="d02f14fb13f68157b9c184d3d2944b69d7c9279922e8dd561e7634655b83d0bd" exitCode=2 Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.885121 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jh4hk" event={"ID":"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec","Type":"ContainerDied","Data":"d02f14fb13f68157b9c184d3d2944b69d7c9279922e8dd561e7634655b83d0bd"} Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.885781 5110 scope.go:117] "RemoveContainer" containerID="d02f14fb13f68157b9c184d3d2944b69d7c9279922e8dd561e7634655b83d0bd" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.909291 5110 scope.go:117] "RemoveContainer" containerID="828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8" Jan 26 00:20:48 crc kubenswrapper[5110]: I0126 00:20:48.969321 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-td7cm" podStartSLOduration=1.9692966200000002 podStartE2EDuration="1.96929662s" podCreationTimestamp="2026-01-26 00:20:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:20:48.967061396 +0000 UTC m=+706.195960005" watchObservedRunningTime="2026-01-26 00:20:48.96929662 +0000 UTC m=+706.198195229" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.001889 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bnkth"] Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.011598 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bnkth"] Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.023549 5110 scope.go:117] "RemoveContainer" containerID="181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.053217 5110 scope.go:117] "RemoveContainer" containerID="acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.068939 5110 scope.go:117] "RemoveContainer" containerID="f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.082769 5110 scope.go:117] "RemoveContainer" containerID="c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.097910 5110 scope.go:117] "RemoveContainer" containerID="09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.114607 5110 scope.go:117] "RemoveContainer" containerID="fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.128408 5110 scope.go:117] "RemoveContainer" containerID="0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.143011 5110 scope.go:117] "RemoveContainer" containerID="ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab" Jan 26 00:20:49 
crc kubenswrapper[5110]: E0126 00:20:49.143461 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab\": container with ID starting with ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab not found: ID does not exist" containerID="ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.143501 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab"} err="failed to get container status \"ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab\": rpc error: code = NotFound desc = could not find container \"ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab\": container with ID starting with ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.143526 5110 scope.go:117] "RemoveContainer" containerID="828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8" Jan 26 00:20:49 crc kubenswrapper[5110]: E0126 00:20:49.144078 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8\": container with ID starting with 828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8 not found: ID does not exist" containerID="828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.144121 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8"} err="failed to get container status 
\"828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8\": rpc error: code = NotFound desc = could not find container \"828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8\": container with ID starting with 828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.144152 5110 scope.go:117] "RemoveContainer" containerID="181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26" Jan 26 00:20:49 crc kubenswrapper[5110]: E0126 00:20:49.144580 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26\": container with ID starting with 181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26 not found: ID does not exist" containerID="181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.144609 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26"} err="failed to get container status \"181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26\": rpc error: code = NotFound desc = could not find container \"181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26\": container with ID starting with 181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.144626 5110 scope.go:117] "RemoveContainer" containerID="acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9" Jan 26 00:20:49 crc kubenswrapper[5110]: E0126 00:20:49.144869 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9\": container with ID starting with acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9 not found: ID does not exist" containerID="acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.144928 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9"} err="failed to get container status \"acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9\": rpc error: code = NotFound desc = could not find container \"acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9\": container with ID starting with acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.144951 5110 scope.go:117] "RemoveContainer" containerID="f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0" Jan 26 00:20:49 crc kubenswrapper[5110]: E0126 00:20:49.145258 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0\": container with ID starting with f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0 not found: ID does not exist" containerID="f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.145282 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0"} err="failed to get container status \"f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0\": rpc error: code = NotFound desc = could not find container \"f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0\": container with ID 
starting with f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.145299 5110 scope.go:117] "RemoveContainer" containerID="c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218" Jan 26 00:20:49 crc kubenswrapper[5110]: E0126 00:20:49.145809 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218\": container with ID starting with c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218 not found: ID does not exist" containerID="c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.145892 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218"} err="failed to get container status \"c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218\": rpc error: code = NotFound desc = could not find container \"c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218\": container with ID starting with c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.145927 5110 scope.go:117] "RemoveContainer" containerID="09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e" Jan 26 00:20:49 crc kubenswrapper[5110]: E0126 00:20:49.146457 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e\": container with ID starting with 09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e not found: ID does not exist" containerID="09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e" Jan 26 
00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.146482 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e"} err="failed to get container status \"09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e\": rpc error: code = NotFound desc = could not find container \"09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e\": container with ID starting with 09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.146498 5110 scope.go:117] "RemoveContainer" containerID="fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b" Jan 26 00:20:49 crc kubenswrapper[5110]: E0126 00:20:49.146762 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b\": container with ID starting with fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b not found: ID does not exist" containerID="fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.146789 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b"} err="failed to get container status \"fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b\": rpc error: code = NotFound desc = could not find container \"fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b\": container with ID starting with fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.146838 5110 scope.go:117] "RemoveContainer" 
containerID="0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73" Jan 26 00:20:49 crc kubenswrapper[5110]: E0126 00:20:49.147116 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73\": container with ID starting with 0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73 not found: ID does not exist" containerID="0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.147143 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73"} err="failed to get container status \"0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73\": rpc error: code = NotFound desc = could not find container \"0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73\": container with ID starting with 0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.147162 5110 scope.go:117] "RemoveContainer" containerID="ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.147402 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab"} err="failed to get container status \"ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab\": rpc error: code = NotFound desc = could not find container \"ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab\": container with ID starting with ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.147436 5110 scope.go:117] 
"RemoveContainer" containerID="828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.147752 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8"} err="failed to get container status \"828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8\": rpc error: code = NotFound desc = could not find container \"828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8\": container with ID starting with 828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.147778 5110 scope.go:117] "RemoveContainer" containerID="181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.148523 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26"} err="failed to get container status \"181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26\": rpc error: code = NotFound desc = could not find container \"181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26\": container with ID starting with 181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.148546 5110 scope.go:117] "RemoveContainer" containerID="acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.148841 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9"} err="failed to get container status \"acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9\": rpc error: code = 
NotFound desc = could not find container \"acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9\": container with ID starting with acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.148860 5110 scope.go:117] "RemoveContainer" containerID="f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.149142 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0"} err="failed to get container status \"f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0\": rpc error: code = NotFound desc = could not find container \"f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0\": container with ID starting with f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.149164 5110 scope.go:117] "RemoveContainer" containerID="c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.149447 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218"} err="failed to get container status \"c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218\": rpc error: code = NotFound desc = could not find container \"c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218\": container with ID starting with c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.149503 5110 scope.go:117] "RemoveContainer" containerID="09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e" Jan 26 00:20:49 crc 
kubenswrapper[5110]: I0126 00:20:49.149818 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e"} err="failed to get container status \"09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e\": rpc error: code = NotFound desc = could not find container \"09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e\": container with ID starting with 09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.149847 5110 scope.go:117] "RemoveContainer" containerID="fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.150225 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b"} err="failed to get container status \"fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b\": rpc error: code = NotFound desc = could not find container \"fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b\": container with ID starting with fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.150275 5110 scope.go:117] "RemoveContainer" containerID="0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.150614 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73"} err="failed to get container status \"0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73\": rpc error: code = NotFound desc = could not find container \"0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73\": container 
with ID starting with 0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.150656 5110 scope.go:117] "RemoveContainer" containerID="ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.150946 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab"} err="failed to get container status \"ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab\": rpc error: code = NotFound desc = could not find container \"ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab\": container with ID starting with ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.150976 5110 scope.go:117] "RemoveContainer" containerID="828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.151340 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8"} err="failed to get container status \"828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8\": rpc error: code = NotFound desc = could not find container \"828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8\": container with ID starting with 828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.151361 5110 scope.go:117] "RemoveContainer" containerID="181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.151599 5110 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26"} err="failed to get container status \"181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26\": rpc error: code = NotFound desc = could not find container \"181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26\": container with ID starting with 181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.151618 5110 scope.go:117] "RemoveContainer" containerID="acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.151908 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9"} err="failed to get container status \"acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9\": rpc error: code = NotFound desc = could not find container \"acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9\": container with ID starting with acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.151927 5110 scope.go:117] "RemoveContainer" containerID="f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.152123 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0"} err="failed to get container status \"f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0\": rpc error: code = NotFound desc = could not find container \"f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0\": container with ID starting with f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0 not found: ID does not 
exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.152141 5110 scope.go:117] "RemoveContainer" containerID="c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.152519 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218"} err="failed to get container status \"c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218\": rpc error: code = NotFound desc = could not find container \"c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218\": container with ID starting with c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.152540 5110 scope.go:117] "RemoveContainer" containerID="09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.152861 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e"} err="failed to get container status \"09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e\": rpc error: code = NotFound desc = could not find container \"09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e\": container with ID starting with 09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.152885 5110 scope.go:117] "RemoveContainer" containerID="fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.153104 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b"} err="failed to get container status 
\"fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b\": rpc error: code = NotFound desc = could not find container \"fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b\": container with ID starting with fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.153121 5110 scope.go:117] "RemoveContainer" containerID="0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.153306 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73"} err="failed to get container status \"0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73\": rpc error: code = NotFound desc = could not find container \"0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73\": container with ID starting with 0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.153324 5110 scope.go:117] "RemoveContainer" containerID="ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.153650 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab"} err="failed to get container status \"ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab\": rpc error: code = NotFound desc = could not find container \"ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab\": container with ID starting with ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.153669 5110 scope.go:117] "RemoveContainer" 
containerID="828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.154020 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8"} err="failed to get container status \"828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8\": rpc error: code = NotFound desc = could not find container \"828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8\": container with ID starting with 828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.154039 5110 scope.go:117] "RemoveContainer" containerID="181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.154261 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26"} err="failed to get container status \"181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26\": rpc error: code = NotFound desc = could not find container \"181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26\": container with ID starting with 181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.154279 5110 scope.go:117] "RemoveContainer" containerID="acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.154505 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9"} err="failed to get container status \"acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9\": rpc error: code = NotFound desc = could 
not find container \"acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9\": container with ID starting with acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.154525 5110 scope.go:117] "RemoveContainer" containerID="f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.154740 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0"} err="failed to get container status \"f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0\": rpc error: code = NotFound desc = could not find container \"f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0\": container with ID starting with f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.154758 5110 scope.go:117] "RemoveContainer" containerID="c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.154993 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218"} err="failed to get container status \"c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218\": rpc error: code = NotFound desc = could not find container \"c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218\": container with ID starting with c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.155021 5110 scope.go:117] "RemoveContainer" containerID="09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 
00:20:49.155219 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e"} err="failed to get container status \"09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e\": rpc error: code = NotFound desc = could not find container \"09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e\": container with ID starting with 09bb5b8ad1cf0163b24ae49c3066899e402857a59abfbd22df10acfb93b7bf8e not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.155238 5110 scope.go:117] "RemoveContainer" containerID="fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.155495 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b"} err="failed to get container status \"fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b\": rpc error: code = NotFound desc = could not find container \"fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b\": container with ID starting with fca30efd8868709834f692b62596ed5648db9292a5e2f2157a4b81cbf816fa1b not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.155519 5110 scope.go:117] "RemoveContainer" containerID="0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.155765 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73"} err="failed to get container status \"0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73\": rpc error: code = NotFound desc = could not find container \"0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73\": container with ID starting with 
0b89cd147747fb5fc3fe38889afc1a906d4bc2709d0fa7891d3eb28ef0935f73 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.155784 5110 scope.go:117] "RemoveContainer" containerID="ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.155964 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab"} err="failed to get container status \"ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab\": rpc error: code = NotFound desc = could not find container \"ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab\": container with ID starting with ec80b085afc1e9025e262f77c6de6c00c3a16f42f250a71784befbdaa9ec0cab not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.155982 5110 scope.go:117] "RemoveContainer" containerID="828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.156278 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8"} err="failed to get container status \"828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8\": rpc error: code = NotFound desc = could not find container \"828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8\": container with ID starting with 828271507f6fc03318ff392209235e18427f8b53ccd91dc2bcc360db0f0827e8 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.156296 5110 scope.go:117] "RemoveContainer" containerID="181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.156460 5110 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26"} err="failed to get container status \"181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26\": rpc error: code = NotFound desc = could not find container \"181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26\": container with ID starting with 181993f3f162482e955f9cb8eac78e2a3118d3a929a19cf4b8280ea79af4af26 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.156492 5110 scope.go:117] "RemoveContainer" containerID="acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.156771 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9"} err="failed to get container status \"acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9\": rpc error: code = NotFound desc = could not find container \"acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9\": container with ID starting with acd9792db6cfdc68e20fa60da9d63fe0c78224b6bcbfc82ff654a96ac3280eb9 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.156789 5110 scope.go:117] "RemoveContainer" containerID="f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.157027 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0"} err="failed to get container status \"f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0\": rpc error: code = NotFound desc = could not find container \"f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0\": container with ID starting with f8888c50ea7c1d1cfd30dba1874ce8fabdfe27b7ffcddb77dbf0caad4fb053d0 not found: ID does not 
exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.157044 5110 scope.go:117] "RemoveContainer" containerID="c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.157277 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218"} err="failed to get container status \"c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218\": rpc error: code = NotFound desc = could not find container \"c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218\": container with ID starting with c84b640339be8514edb04b876af2ccb55d2474165d11ef323d55545ef3797218 not found: ID does not exist" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.324400 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2cba3eb-9a27-49a0-a3e6-645a8853c027" path="/var/lib/kubelet/pods/c2cba3eb-9a27-49a0-a3e6-645a8853c027/volumes" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.325912 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f11dce8d-124f-497f-96a2-11dd1dddd26d" path="/var/lib/kubelet/pods/f11dce8d-124f-497f-96a2-11dd1dddd26d/volumes" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.896073 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jh4hk_f2948d2b-fac7-4f3f-8b5f-f6f9c914daec/kube-multus/0.log" Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.896262 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jh4hk" event={"ID":"f2948d2b-fac7-4f3f-8b5f-f6f9c914daec","Type":"ContainerStarted","Data":"ffdc252aa5b002c9d4b08e9d4679f48a15641983ede8deb071115a4df3a2f671"} Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.900276 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz" 
event={"ID":"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949","Type":"ContainerStarted","Data":"85a343be4133186f286616f8fb08244fc03e407dac7e44b2a4fa130375a23fe9"} Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.900300 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz" event={"ID":"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949","Type":"ContainerStarted","Data":"d116e77bcf10171b2563b0db96d5e050e5f2e6e55d50b3108eb9708a23b53707"} Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.900311 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz" event={"ID":"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949","Type":"ContainerStarted","Data":"760b63af97aadbffd00275ea62ddfc5d85a24e70e5025cf15a052e79ad033da5"} Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.900321 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz" event={"ID":"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949","Type":"ContainerStarted","Data":"dc89359e028d8c89ccc9ab0e415f82b6d625a3a7b21d9a96ba107d8d5f972e10"} Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.900331 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz" event={"ID":"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949","Type":"ContainerStarted","Data":"32dd901f7714938fc4750f785e33fa69991ac6ed345a54bed27d3ad8792bd6e7"} Jan 26 00:20:49 crc kubenswrapper[5110]: I0126 00:20:49.900340 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz" event={"ID":"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949","Type":"ContainerStarted","Data":"a5c9102bf41a7e7d3ceb43441a13e7a45febedc5c9c488aaadc7096dddc50baf"} Jan 26 00:20:51 crc kubenswrapper[5110]: I0126 00:20:51.922923 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz" 
event={"ID":"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949","Type":"ContainerStarted","Data":"c840ec5e7c7889d48d4a1ccf112ddeadab798a9fbb159d637ebbf8248e9d490c"} Jan 26 00:20:54 crc kubenswrapper[5110]: I0126 00:20:54.948085 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz" event={"ID":"618a4b44-a8e1-4fec-bb7c-83ed2e1bf949","Type":"ContainerStarted","Data":"d71b694296ab62e6289691cfcbe2b34bbfaf03443dd80e8187e2275b886dedc7"} Jan 26 00:20:54 crc kubenswrapper[5110]: I0126 00:20:54.949022 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz" Jan 26 00:20:54 crc kubenswrapper[5110]: I0126 00:20:54.949042 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz" Jan 26 00:20:54 crc kubenswrapper[5110]: I0126 00:20:54.949055 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz" Jan 26 00:20:54 crc kubenswrapper[5110]: I0126 00:20:54.986434 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz" Jan 26 00:20:54 crc kubenswrapper[5110]: I0126 00:20:54.990746 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz" podStartSLOduration=6.990727684 podStartE2EDuration="6.990727684s" podCreationTimestamp="2026-01-26 00:20:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:20:54.983361184 +0000 UTC m=+712.212259803" watchObservedRunningTime="2026-01-26 00:20:54.990727684 +0000 UTC m=+712.219626293" Jan 26 00:20:54 crc kubenswrapper[5110]: I0126 00:20:54.993369 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz" Jan 26 
00:20:56 crc kubenswrapper[5110]: I0126 00:20:56.813946 5110 patch_prober.go:28] interesting pod/machine-config-daemon-c6tpr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:20:56 crc kubenswrapper[5110]: I0126 00:20:56.814509 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:20:56 crc kubenswrapper[5110]: I0126 00:20:56.814605 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" Jan 26 00:20:56 crc kubenswrapper[5110]: I0126 00:20:56.815770 5110 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8cf1a32c65b02796064cd080f35e06d7241ce6749daefe8e41aaf499a12db038"} pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:20:56 crc kubenswrapper[5110]: I0126 00:20:56.815928 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" containerID="cri-o://8cf1a32c65b02796064cd080f35e06d7241ce6749daefe8e41aaf499a12db038" gracePeriod=600 Jan 26 00:20:57 crc kubenswrapper[5110]: I0126 00:20:57.968411 5110 generic.go:358] "Generic (PLEG): container finished" podID="f15bed73-d669-439f-9828-7b952d9bfe65" 
containerID="8cf1a32c65b02796064cd080f35e06d7241ce6749daefe8e41aaf499a12db038" exitCode=0 Jan 26 00:20:57 crc kubenswrapper[5110]: I0126 00:20:57.968485 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" event={"ID":"f15bed73-d669-439f-9828-7b952d9bfe65","Type":"ContainerDied","Data":"8cf1a32c65b02796064cd080f35e06d7241ce6749daefe8e41aaf499a12db038"} Jan 26 00:20:57 crc kubenswrapper[5110]: I0126 00:20:57.968963 5110 scope.go:117] "RemoveContainer" containerID="4060a0b2df1e9f152ebc91345ed0924c6529a54f677fc892640b96180c61050a" Jan 26 00:20:58 crc kubenswrapper[5110]: I0126 00:20:58.981270 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" event={"ID":"f15bed73-d669-439f-9828-7b952d9bfe65","Type":"ContainerStarted","Data":"f81b2a95bda7dc33d66a95aaff56fe2d3c57ed6f0906b6eb9d8b3b10b83a4ccf"} Jan 26 00:21:26 crc kubenswrapper[5110]: I0126 00:21:26.980644 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xdrtz" Jan 26 00:21:55 crc kubenswrapper[5110]: I0126 00:21:55.023364 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnd4t"] Jan 26 00:21:55 crc kubenswrapper[5110]: I0126 00:21:55.026064 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nnd4t" podUID="304af444-ca8b-464c-a6dd-e4aca996cb53" containerName="registry-server" containerID="cri-o://9199986e21190916de438833427ee11059dd0b2dbaadc6131aba4914f2788d37" gracePeriod=30 Jan 26 00:21:55 crc kubenswrapper[5110]: I0126 00:21:55.365283 5110 generic.go:358] "Generic (PLEG): container finished" podID="304af444-ca8b-464c-a6dd-e4aca996cb53" containerID="9199986e21190916de438833427ee11059dd0b2dbaadc6131aba4914f2788d37" exitCode=0 Jan 26 00:21:55 crc kubenswrapper[5110]: I0126 00:21:55.365369 5110 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnd4t" event={"ID":"304af444-ca8b-464c-a6dd-e4aca996cb53","Type":"ContainerDied","Data":"9199986e21190916de438833427ee11059dd0b2dbaadc6131aba4914f2788d37"} Jan 26 00:21:55 crc kubenswrapper[5110]: I0126 00:21:55.412994 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nnd4t" Jan 26 00:21:55 crc kubenswrapper[5110]: I0126 00:21:55.493431 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/304af444-ca8b-464c-a6dd-e4aca996cb53-catalog-content\") pod \"304af444-ca8b-464c-a6dd-e4aca996cb53\" (UID: \"304af444-ca8b-464c-a6dd-e4aca996cb53\") " Jan 26 00:21:55 crc kubenswrapper[5110]: I0126 00:21:55.493658 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/304af444-ca8b-464c-a6dd-e4aca996cb53-utilities\") pod \"304af444-ca8b-464c-a6dd-e4aca996cb53\" (UID: \"304af444-ca8b-464c-a6dd-e4aca996cb53\") " Jan 26 00:21:55 crc kubenswrapper[5110]: I0126 00:21:55.493722 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t66rb\" (UniqueName: \"kubernetes.io/projected/304af444-ca8b-464c-a6dd-e4aca996cb53-kube-api-access-t66rb\") pod \"304af444-ca8b-464c-a6dd-e4aca996cb53\" (UID: \"304af444-ca8b-464c-a6dd-e4aca996cb53\") " Jan 26 00:21:55 crc kubenswrapper[5110]: I0126 00:21:55.495093 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/304af444-ca8b-464c-a6dd-e4aca996cb53-utilities" (OuterVolumeSpecName: "utilities") pod "304af444-ca8b-464c-a6dd-e4aca996cb53" (UID: "304af444-ca8b-464c-a6dd-e4aca996cb53"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:21:55 crc kubenswrapper[5110]: I0126 00:21:55.500065 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/304af444-ca8b-464c-a6dd-e4aca996cb53-kube-api-access-t66rb" (OuterVolumeSpecName: "kube-api-access-t66rb") pod "304af444-ca8b-464c-a6dd-e4aca996cb53" (UID: "304af444-ca8b-464c-a6dd-e4aca996cb53"). InnerVolumeSpecName "kube-api-access-t66rb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:21:55 crc kubenswrapper[5110]: I0126 00:21:55.507811 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/304af444-ca8b-464c-a6dd-e4aca996cb53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "304af444-ca8b-464c-a6dd-e4aca996cb53" (UID: "304af444-ca8b-464c-a6dd-e4aca996cb53"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:21:55 crc kubenswrapper[5110]: I0126 00:21:55.595479 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/304af444-ca8b-464c-a6dd-e4aca996cb53-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:55 crc kubenswrapper[5110]: I0126 00:21:55.595530 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t66rb\" (UniqueName: \"kubernetes.io/projected/304af444-ca8b-464c-a6dd-e4aca996cb53-kube-api-access-t66rb\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:55 crc kubenswrapper[5110]: I0126 00:21:55.595553 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/304af444-ca8b-464c-a6dd-e4aca996cb53-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:56 crc kubenswrapper[5110]: I0126 00:21:56.375991 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnd4t" 
event={"ID":"304af444-ca8b-464c-a6dd-e4aca996cb53","Type":"ContainerDied","Data":"75ee5b3d59ddf192a92c6165d0a0562bebe8b54092296a81da11fc98f0c8d20d"} Jan 26 00:21:56 crc kubenswrapper[5110]: I0126 00:21:56.376054 5110 scope.go:117] "RemoveContainer" containerID="9199986e21190916de438833427ee11059dd0b2dbaadc6131aba4914f2788d37" Jan 26 00:21:56 crc kubenswrapper[5110]: I0126 00:21:56.376349 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nnd4t" Jan 26 00:21:56 crc kubenswrapper[5110]: I0126 00:21:56.395634 5110 scope.go:117] "RemoveContainer" containerID="d6a90c098a2b93f956bb0f46458c913d988e2ad17baf4714c12c288e5080bc66" Jan 26 00:21:56 crc kubenswrapper[5110]: I0126 00:21:56.408278 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnd4t"] Jan 26 00:21:56 crc kubenswrapper[5110]: I0126 00:21:56.412321 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnd4t"] Jan 26 00:21:56 crc kubenswrapper[5110]: I0126 00:21:56.435220 5110 scope.go:117] "RemoveContainer" containerID="b0965f37ea7402dd449fa827f85f1ab536649f82ec63fbb0cd59dff9f3062b34" Jan 26 00:21:57 crc kubenswrapper[5110]: I0126 00:21:57.330067 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="304af444-ca8b-464c-a6dd-e4aca996cb53" path="/var/lib/kubelet/pods/304af444-ca8b-464c-a6dd-e4aca996cb53/volumes" Jan 26 00:21:58 crc kubenswrapper[5110]: I0126 00:21:58.735229 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww"] Jan 26 00:21:58 crc kubenswrapper[5110]: I0126 00:21:58.735842 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="304af444-ca8b-464c-a6dd-e4aca996cb53" containerName="registry-server" Jan 26 00:21:58 crc kubenswrapper[5110]: I0126 00:21:58.735856 5110 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="304af444-ca8b-464c-a6dd-e4aca996cb53" containerName="registry-server" Jan 26 00:21:58 crc kubenswrapper[5110]: I0126 00:21:58.735875 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="304af444-ca8b-464c-a6dd-e4aca996cb53" containerName="extract-content" Jan 26 00:21:58 crc kubenswrapper[5110]: I0126 00:21:58.735881 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="304af444-ca8b-464c-a6dd-e4aca996cb53" containerName="extract-content" Jan 26 00:21:58 crc kubenswrapper[5110]: I0126 00:21:58.735892 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="304af444-ca8b-464c-a6dd-e4aca996cb53" containerName="extract-utilities" Jan 26 00:21:58 crc kubenswrapper[5110]: I0126 00:21:58.735898 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="304af444-ca8b-464c-a6dd-e4aca996cb53" containerName="extract-utilities" Jan 26 00:21:58 crc kubenswrapper[5110]: I0126 00:21:58.736005 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="304af444-ca8b-464c-a6dd-e4aca996cb53" containerName="registry-server" Jan 26 00:21:58 crc kubenswrapper[5110]: I0126 00:21:58.748360 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww"] Jan 26 00:21:58 crc kubenswrapper[5110]: I0126 00:21:58.748548 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww" Jan 26 00:21:58 crc kubenswrapper[5110]: I0126 00:21:58.786962 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 26 00:21:58 crc kubenswrapper[5110]: I0126 00:21:58.845899 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6wl9\" (UniqueName: \"kubernetes.io/projected/f8b123a0-9bf2-4b5f-b26d-14407c464561-kube-api-access-c6wl9\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww\" (UID: \"f8b123a0-9bf2-4b5f-b26d-14407c464561\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww" Jan 26 00:21:58 crc kubenswrapper[5110]: I0126 00:21:58.846055 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f8b123a0-9bf2-4b5f-b26d-14407c464561-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww\" (UID: \"f8b123a0-9bf2-4b5f-b26d-14407c464561\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww" Jan 26 00:21:58 crc kubenswrapper[5110]: I0126 00:21:58.846327 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f8b123a0-9bf2-4b5f-b26d-14407c464561-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww\" (UID: \"f8b123a0-9bf2-4b5f-b26d-14407c464561\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww" Jan 26 00:21:58 crc kubenswrapper[5110]: I0126 00:21:58.947857 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c6wl9\" (UniqueName: 
\"kubernetes.io/projected/f8b123a0-9bf2-4b5f-b26d-14407c464561-kube-api-access-c6wl9\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww\" (UID: \"f8b123a0-9bf2-4b5f-b26d-14407c464561\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww" Jan 26 00:21:58 crc kubenswrapper[5110]: I0126 00:21:58.947931 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f8b123a0-9bf2-4b5f-b26d-14407c464561-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww\" (UID: \"f8b123a0-9bf2-4b5f-b26d-14407c464561\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww" Jan 26 00:21:58 crc kubenswrapper[5110]: I0126 00:21:58.947982 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f8b123a0-9bf2-4b5f-b26d-14407c464561-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww\" (UID: \"f8b123a0-9bf2-4b5f-b26d-14407c464561\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww" Jan 26 00:21:58 crc kubenswrapper[5110]: I0126 00:21:58.948485 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f8b123a0-9bf2-4b5f-b26d-14407c464561-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww\" (UID: \"f8b123a0-9bf2-4b5f-b26d-14407c464561\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww" Jan 26 00:21:58 crc kubenswrapper[5110]: I0126 00:21:58.948621 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f8b123a0-9bf2-4b5f-b26d-14407c464561-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww\" (UID: 
\"f8b123a0-9bf2-4b5f-b26d-14407c464561\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww" Jan 26 00:21:58 crc kubenswrapper[5110]: I0126 00:21:58.968654 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6wl9\" (UniqueName: \"kubernetes.io/projected/f8b123a0-9bf2-4b5f-b26d-14407c464561-kube-api-access-c6wl9\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww\" (UID: \"f8b123a0-9bf2-4b5f-b26d-14407c464561\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww" Jan 26 00:21:59 crc kubenswrapper[5110]: I0126 00:21:59.098780 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww" Jan 26 00:21:59 crc kubenswrapper[5110]: I0126 00:21:59.354946 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww"] Jan 26 00:21:59 crc kubenswrapper[5110]: I0126 00:21:59.408695 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww" event={"ID":"f8b123a0-9bf2-4b5f-b26d-14407c464561","Type":"ContainerStarted","Data":"818ddea60437b31e30ed456525c4bcfb613bdb055aba236a049e3c643417bbe1"} Jan 26 00:22:00 crc kubenswrapper[5110]: I0126 00:22:00.144851 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489782-dkgs5"] Jan 26 00:22:00 crc kubenswrapper[5110]: I0126 00:22:00.167196 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489782-dkgs5"] Jan 26 00:22:00 crc kubenswrapper[5110]: I0126 00:22:00.167361 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489782-dkgs5" Jan 26 00:22:00 crc kubenswrapper[5110]: I0126 00:22:00.173057 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dfffg\"" Jan 26 00:22:00 crc kubenswrapper[5110]: I0126 00:22:00.173220 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:22:00 crc kubenswrapper[5110]: I0126 00:22:00.173846 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:22:00 crc kubenswrapper[5110]: I0126 00:22:00.267556 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm6mj\" (UniqueName: \"kubernetes.io/projected/329081df-516b-4138-93f0-34cedd468e97-kube-api-access-jm6mj\") pod \"auto-csr-approver-29489782-dkgs5\" (UID: \"329081df-516b-4138-93f0-34cedd468e97\") " pod="openshift-infra/auto-csr-approver-29489782-dkgs5" Jan 26 00:22:00 crc kubenswrapper[5110]: I0126 00:22:00.368893 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jm6mj\" (UniqueName: \"kubernetes.io/projected/329081df-516b-4138-93f0-34cedd468e97-kube-api-access-jm6mj\") pod \"auto-csr-approver-29489782-dkgs5\" (UID: \"329081df-516b-4138-93f0-34cedd468e97\") " pod="openshift-infra/auto-csr-approver-29489782-dkgs5" Jan 26 00:22:00 crc kubenswrapper[5110]: I0126 00:22:00.392481 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm6mj\" (UniqueName: \"kubernetes.io/projected/329081df-516b-4138-93f0-34cedd468e97-kube-api-access-jm6mj\") pod \"auto-csr-approver-29489782-dkgs5\" (UID: \"329081df-516b-4138-93f0-34cedd468e97\") " pod="openshift-infra/auto-csr-approver-29489782-dkgs5" Jan 26 00:22:00 crc kubenswrapper[5110]: I0126 00:22:00.416465 5110 
generic.go:358] "Generic (PLEG): container finished" podID="f8b123a0-9bf2-4b5f-b26d-14407c464561" containerID="f06cd9aeb40fb76ae63136611a4451110ed1b48858c8b1e94267368f85090e56" exitCode=0 Jan 26 00:22:00 crc kubenswrapper[5110]: I0126 00:22:00.416567 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww" event={"ID":"f8b123a0-9bf2-4b5f-b26d-14407c464561","Type":"ContainerDied","Data":"f06cd9aeb40fb76ae63136611a4451110ed1b48858c8b1e94267368f85090e56"} Jan 26 00:22:00 crc kubenswrapper[5110]: I0126 00:22:00.497258 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489782-dkgs5" Jan 26 00:22:00 crc kubenswrapper[5110]: I0126 00:22:00.690988 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489782-dkgs5"] Jan 26 00:22:00 crc kubenswrapper[5110]: W0126 00:22:00.699958 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod329081df_516b_4138_93f0_34cedd468e97.slice/crio-17e0b9d6348d98156af6958855dd55b3100ff808dfdce7b539d2b92d8acc011b WatchSource:0}: Error finding container 17e0b9d6348d98156af6958855dd55b3100ff808dfdce7b539d2b92d8acc011b: Status 404 returned error can't find the container with id 17e0b9d6348d98156af6958855dd55b3100ff808dfdce7b539d2b92d8acc011b Jan 26 00:22:01 crc kubenswrapper[5110]: I0126 00:22:01.427309 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489782-dkgs5" event={"ID":"329081df-516b-4138-93f0-34cedd468e97","Type":"ContainerStarted","Data":"17e0b9d6348d98156af6958855dd55b3100ff808dfdce7b539d2b92d8acc011b"} Jan 26 00:22:01 crc kubenswrapper[5110]: I0126 00:22:01.690344 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bkjxb"] Jan 26 00:22:01 crc kubenswrapper[5110]: I0126 
00:22:01.702629 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bkjxb" Jan 26 00:22:01 crc kubenswrapper[5110]: I0126 00:22:01.721413 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bkjxb"] Jan 26 00:22:01 crc kubenswrapper[5110]: I0126 00:22:01.791045 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v26wk\" (UniqueName: \"kubernetes.io/projected/582e457d-8bb7-4c07-96f7-bbc77fce72cb-kube-api-access-v26wk\") pod \"redhat-operators-bkjxb\" (UID: \"582e457d-8bb7-4c07-96f7-bbc77fce72cb\") " pod="openshift-marketplace/redhat-operators-bkjxb" Jan 26 00:22:01 crc kubenswrapper[5110]: I0126 00:22:01.791105 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/582e457d-8bb7-4c07-96f7-bbc77fce72cb-utilities\") pod \"redhat-operators-bkjxb\" (UID: \"582e457d-8bb7-4c07-96f7-bbc77fce72cb\") " pod="openshift-marketplace/redhat-operators-bkjxb" Jan 26 00:22:01 crc kubenswrapper[5110]: I0126 00:22:01.791155 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/582e457d-8bb7-4c07-96f7-bbc77fce72cb-catalog-content\") pod \"redhat-operators-bkjxb\" (UID: \"582e457d-8bb7-4c07-96f7-bbc77fce72cb\") " pod="openshift-marketplace/redhat-operators-bkjxb" Jan 26 00:22:01 crc kubenswrapper[5110]: I0126 00:22:01.892771 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v26wk\" (UniqueName: \"kubernetes.io/projected/582e457d-8bb7-4c07-96f7-bbc77fce72cb-kube-api-access-v26wk\") pod \"redhat-operators-bkjxb\" (UID: \"582e457d-8bb7-4c07-96f7-bbc77fce72cb\") " pod="openshift-marketplace/redhat-operators-bkjxb" Jan 26 00:22:01 crc kubenswrapper[5110]: I0126 
00:22:01.892822 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/582e457d-8bb7-4c07-96f7-bbc77fce72cb-utilities\") pod \"redhat-operators-bkjxb\" (UID: \"582e457d-8bb7-4c07-96f7-bbc77fce72cb\") " pod="openshift-marketplace/redhat-operators-bkjxb" Jan 26 00:22:01 crc kubenswrapper[5110]: I0126 00:22:01.892852 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/582e457d-8bb7-4c07-96f7-bbc77fce72cb-catalog-content\") pod \"redhat-operators-bkjxb\" (UID: \"582e457d-8bb7-4c07-96f7-bbc77fce72cb\") " pod="openshift-marketplace/redhat-operators-bkjxb" Jan 26 00:22:01 crc kubenswrapper[5110]: I0126 00:22:01.893282 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/582e457d-8bb7-4c07-96f7-bbc77fce72cb-catalog-content\") pod \"redhat-operators-bkjxb\" (UID: \"582e457d-8bb7-4c07-96f7-bbc77fce72cb\") " pod="openshift-marketplace/redhat-operators-bkjxb" Jan 26 00:22:01 crc kubenswrapper[5110]: I0126 00:22:01.893498 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/582e457d-8bb7-4c07-96f7-bbc77fce72cb-utilities\") pod \"redhat-operators-bkjxb\" (UID: \"582e457d-8bb7-4c07-96f7-bbc77fce72cb\") " pod="openshift-marketplace/redhat-operators-bkjxb" Jan 26 00:22:01 crc kubenswrapper[5110]: I0126 00:22:01.917532 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v26wk\" (UniqueName: \"kubernetes.io/projected/582e457d-8bb7-4c07-96f7-bbc77fce72cb-kube-api-access-v26wk\") pod \"redhat-operators-bkjxb\" (UID: \"582e457d-8bb7-4c07-96f7-bbc77fce72cb\") " pod="openshift-marketplace/redhat-operators-bkjxb" Jan 26 00:22:02 crc kubenswrapper[5110]: I0126 00:22:02.028547 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bkjxb" Jan 26 00:22:02 crc kubenswrapper[5110]: I0126 00:22:02.264030 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bkjxb"] Jan 26 00:22:02 crc kubenswrapper[5110]: I0126 00:22:02.433597 5110 generic.go:358] "Generic (PLEG): container finished" podID="f8b123a0-9bf2-4b5f-b26d-14407c464561" containerID="1a0b62aced74b54f210770a590b85657995c8fd94518c8ddf9bdfb659d45430a" exitCode=0 Jan 26 00:22:02 crc kubenswrapper[5110]: I0126 00:22:02.434281 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww" event={"ID":"f8b123a0-9bf2-4b5f-b26d-14407c464561","Type":"ContainerDied","Data":"1a0b62aced74b54f210770a590b85657995c8fd94518c8ddf9bdfb659d45430a"} Jan 26 00:22:02 crc kubenswrapper[5110]: I0126 00:22:02.436511 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489782-dkgs5" event={"ID":"329081df-516b-4138-93f0-34cedd468e97","Type":"ContainerStarted","Data":"e9be00fbcb791d71d3d2fef5ce96cd93b1913b6df910e94162e4d45807b4b998"} Jan 26 00:22:02 crc kubenswrapper[5110]: I0126 00:22:02.440673 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkjxb" event={"ID":"582e457d-8bb7-4c07-96f7-bbc77fce72cb","Type":"ContainerStarted","Data":"f3d247990c87a7960a7610fdc40f9d46f1436160c0cefa501869d83fbc0c6f26"} Jan 26 00:22:02 crc kubenswrapper[5110]: I0126 00:22:02.440733 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkjxb" event={"ID":"582e457d-8bb7-4c07-96f7-bbc77fce72cb","Type":"ContainerStarted","Data":"f52c1acd1ef49b9c342a9af7f7236fde2f1b4eb7fc2e757ef4bd73acf591b25a"} Jan 26 00:22:02 crc kubenswrapper[5110]: I0126 00:22:02.472615 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-infra/auto-csr-approver-29489782-dkgs5" podStartSLOduration=1.325272852 podStartE2EDuration="2.472595574s" podCreationTimestamp="2026-01-26 00:22:00 +0000 UTC" firstStartedPulling="2026-01-26 00:22:00.701370763 +0000 UTC m=+777.930269372" lastFinishedPulling="2026-01-26 00:22:01.848693495 +0000 UTC m=+779.077592094" observedRunningTime="2026-01-26 00:22:02.469695781 +0000 UTC m=+779.698594410" watchObservedRunningTime="2026-01-26 00:22:02.472595574 +0000 UTC m=+779.701494183" Jan 26 00:22:03 crc kubenswrapper[5110]: I0126 00:22:03.450242 5110 generic.go:358] "Generic (PLEG): container finished" podID="f8b123a0-9bf2-4b5f-b26d-14407c464561" containerID="70db714cec6db802633740ccd8f95a93b933c28066c0bb3a3a598169c49838fe" exitCode=0 Jan 26 00:22:03 crc kubenswrapper[5110]: I0126 00:22:03.450318 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww" event={"ID":"f8b123a0-9bf2-4b5f-b26d-14407c464561","Type":"ContainerDied","Data":"70db714cec6db802633740ccd8f95a93b933c28066c0bb3a3a598169c49838fe"} Jan 26 00:22:03 crc kubenswrapper[5110]: I0126 00:22:03.453700 5110 generic.go:358] "Generic (PLEG): container finished" podID="329081df-516b-4138-93f0-34cedd468e97" containerID="e9be00fbcb791d71d3d2fef5ce96cd93b1913b6df910e94162e4d45807b4b998" exitCode=0 Jan 26 00:22:03 crc kubenswrapper[5110]: I0126 00:22:03.453851 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489782-dkgs5" event={"ID":"329081df-516b-4138-93f0-34cedd468e97","Type":"ContainerDied","Data":"e9be00fbcb791d71d3d2fef5ce96cd93b1913b6df910e94162e4d45807b4b998"} Jan 26 00:22:03 crc kubenswrapper[5110]: I0126 00:22:03.456015 5110 generic.go:358] "Generic (PLEG): container finished" podID="582e457d-8bb7-4c07-96f7-bbc77fce72cb" containerID="f3d247990c87a7960a7610fdc40f9d46f1436160c0cefa501869d83fbc0c6f26" exitCode=0 Jan 26 00:22:03 crc kubenswrapper[5110]: I0126 
00:22:03.456126 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkjxb" event={"ID":"582e457d-8bb7-4c07-96f7-bbc77fce72cb","Type":"ContainerDied","Data":"f3d247990c87a7960a7610fdc40f9d46f1436160c0cefa501869d83fbc0c6f26"} Jan 26 00:22:04 crc kubenswrapper[5110]: I0126 00:22:04.466170 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkjxb" event={"ID":"582e457d-8bb7-4c07-96f7-bbc77fce72cb","Type":"ContainerStarted","Data":"d32f1d10fdcc1a158c14127f974071a5a4e2c0d177cbdaa9b52be9f550c7da45"} Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.255707 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489782-dkgs5" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.260122 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.322199 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6wl9\" (UniqueName: \"kubernetes.io/projected/f8b123a0-9bf2-4b5f-b26d-14407c464561-kube-api-access-c6wl9\") pod \"f8b123a0-9bf2-4b5f-b26d-14407c464561\" (UID: \"f8b123a0-9bf2-4b5f-b26d-14407c464561\") " Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.322271 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f8b123a0-9bf2-4b5f-b26d-14407c464561-util\") pod \"f8b123a0-9bf2-4b5f-b26d-14407c464561\" (UID: \"f8b123a0-9bf2-4b5f-b26d-14407c464561\") " Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.322297 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm6mj\" (UniqueName: 
\"kubernetes.io/projected/329081df-516b-4138-93f0-34cedd468e97-kube-api-access-jm6mj\") pod \"329081df-516b-4138-93f0-34cedd468e97\" (UID: \"329081df-516b-4138-93f0-34cedd468e97\") " Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.322376 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f8b123a0-9bf2-4b5f-b26d-14407c464561-bundle\") pod \"f8b123a0-9bf2-4b5f-b26d-14407c464561\" (UID: \"f8b123a0-9bf2-4b5f-b26d-14407c464561\") " Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.325306 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8b123a0-9bf2-4b5f-b26d-14407c464561-bundle" (OuterVolumeSpecName: "bundle") pod "f8b123a0-9bf2-4b5f-b26d-14407c464561" (UID: "f8b123a0-9bf2-4b5f-b26d-14407c464561"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.331181 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/329081df-516b-4138-93f0-34cedd468e97-kube-api-access-jm6mj" (OuterVolumeSpecName: "kube-api-access-jm6mj") pod "329081df-516b-4138-93f0-34cedd468e97" (UID: "329081df-516b-4138-93f0-34cedd468e97"). InnerVolumeSpecName "kube-api-access-jm6mj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.331998 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8b123a0-9bf2-4b5f-b26d-14407c464561-util" (OuterVolumeSpecName: "util") pod "f8b123a0-9bf2-4b5f-b26d-14407c464561" (UID: "f8b123a0-9bf2-4b5f-b26d-14407c464561"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.345184 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8b123a0-9bf2-4b5f-b26d-14407c464561-kube-api-access-c6wl9" (OuterVolumeSpecName: "kube-api-access-c6wl9") pod "f8b123a0-9bf2-4b5f-b26d-14407c464561" (UID: "f8b123a0-9bf2-4b5f-b26d-14407c464561"). InnerVolumeSpecName "kube-api-access-c6wl9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.423656 5110 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f8b123a0-9bf2-4b5f-b26d-14407c464561-util\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.423715 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jm6mj\" (UniqueName: \"kubernetes.io/projected/329081df-516b-4138-93f0-34cedd468e97-kube-api-access-jm6mj\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.423738 5110 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f8b123a0-9bf2-4b5f-b26d-14407c464561-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.423755 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c6wl9\" (UniqueName: \"kubernetes.io/projected/f8b123a0-9bf2-4b5f-b26d-14407c464561-kube-api-access-c6wl9\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.477528 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww" event={"ID":"f8b123a0-9bf2-4b5f-b26d-14407c464561","Type":"ContainerDied","Data":"818ddea60437b31e30ed456525c4bcfb613bdb055aba236a049e3c643417bbe1"} Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 
00:22:05.477584 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="818ddea60437b31e30ed456525c4bcfb613bdb055aba236a049e3c643417bbe1" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.477697 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.502073 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489782-dkgs5" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.502133 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489782-dkgs5" event={"ID":"329081df-516b-4138-93f0-34cedd468e97","Type":"ContainerDied","Data":"17e0b9d6348d98156af6958855dd55b3100ff808dfdce7b539d2b92d8acc011b"} Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.502207 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17e0b9d6348d98156af6958855dd55b3100ff808dfdce7b539d2b92d8acc011b" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.742352 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh"] Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.743152 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="329081df-516b-4138-93f0-34cedd468e97" containerName="oc" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.743172 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="329081df-516b-4138-93f0-34cedd468e97" containerName="oc" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.743195 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f8b123a0-9bf2-4b5f-b26d-14407c464561" containerName="pull" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 
00:22:05.743201 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b123a0-9bf2-4b5f-b26d-14407c464561" containerName="pull" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.743248 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f8b123a0-9bf2-4b5f-b26d-14407c464561" containerName="extract" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.743254 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b123a0-9bf2-4b5f-b26d-14407c464561" containerName="extract" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.743265 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f8b123a0-9bf2-4b5f-b26d-14407c464561" containerName="util" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.743270 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b123a0-9bf2-4b5f-b26d-14407c464561" containerName="util" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.743360 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="f8b123a0-9bf2-4b5f-b26d-14407c464561" containerName="extract" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.743370 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="329081df-516b-4138-93f0-34cedd468e97" containerName="oc" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.865706 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh"] Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.865883 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.868831 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.929605 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2711e85a-af93-43d8-8e4a-d6b92be4f574-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh\" (UID: \"2711e85a-af93-43d8-8e4a-d6b92be4f574\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.929747 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqrsh\" (UniqueName: \"kubernetes.io/projected/2711e85a-af93-43d8-8e4a-d6b92be4f574-kube-api-access-wqrsh\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh\" (UID: \"2711e85a-af93-43d8-8e4a-d6b92be4f574\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" Jan 26 00:22:05 crc kubenswrapper[5110]: I0126 00:22:05.929782 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2711e85a-af93-43d8-8e4a-d6b92be4f574-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh\" (UID: \"2711e85a-af93-43d8-8e4a-d6b92be4f574\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" Jan 26 00:22:06 crc kubenswrapper[5110]: I0126 00:22:06.030640 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wqrsh\" (UniqueName: 
\"kubernetes.io/projected/2711e85a-af93-43d8-8e4a-d6b92be4f574-kube-api-access-wqrsh\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh\" (UID: \"2711e85a-af93-43d8-8e4a-d6b92be4f574\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" Jan 26 00:22:06 crc kubenswrapper[5110]: I0126 00:22:06.031420 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2711e85a-af93-43d8-8e4a-d6b92be4f574-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh\" (UID: \"2711e85a-af93-43d8-8e4a-d6b92be4f574\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" Jan 26 00:22:06 crc kubenswrapper[5110]: I0126 00:22:06.030732 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2711e85a-af93-43d8-8e4a-d6b92be4f574-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh\" (UID: \"2711e85a-af93-43d8-8e4a-d6b92be4f574\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" Jan 26 00:22:06 crc kubenswrapper[5110]: I0126 00:22:06.032072 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2711e85a-af93-43d8-8e4a-d6b92be4f574-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh\" (UID: \"2711e85a-af93-43d8-8e4a-d6b92be4f574\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" Jan 26 00:22:06 crc kubenswrapper[5110]: I0126 00:22:06.032157 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2711e85a-af93-43d8-8e4a-d6b92be4f574-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh\" (UID: 
\"2711e85a-af93-43d8-8e4a-d6b92be4f574\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" Jan 26 00:22:06 crc kubenswrapper[5110]: I0126 00:22:06.068715 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqrsh\" (UniqueName: \"kubernetes.io/projected/2711e85a-af93-43d8-8e4a-d6b92be4f574-kube-api-access-wqrsh\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh\" (UID: \"2711e85a-af93-43d8-8e4a-d6b92be4f574\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" Jan 26 00:22:06 crc kubenswrapper[5110]: I0126 00:22:06.187243 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" Jan 26 00:22:06 crc kubenswrapper[5110]: I0126 00:22:06.548573 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk"] Jan 26 00:22:06 crc kubenswrapper[5110]: I0126 00:22:06.559940 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk"] Jan 26 00:22:06 crc kubenswrapper[5110]: I0126 00:22:06.560164 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk" Jan 26 00:22:06 crc kubenswrapper[5110]: I0126 00:22:06.644780 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsdtk\" (UniqueName: \"kubernetes.io/projected/d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc-kube-api-access-lsdtk\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk\" (UID: \"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk" Jan 26 00:22:06 crc kubenswrapper[5110]: I0126 00:22:06.644906 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk\" (UID: \"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk" Jan 26 00:22:06 crc kubenswrapper[5110]: I0126 00:22:06.644950 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk\" (UID: \"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk" Jan 26 00:22:06 crc kubenswrapper[5110]: I0126 00:22:06.745899 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lsdtk\" (UniqueName: \"kubernetes.io/projected/d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc-kube-api-access-lsdtk\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk\" (UID: \"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk" Jan 26 00:22:06 crc kubenswrapper[5110]: I0126 00:22:06.746420 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk\" (UID: \"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk" Jan 26 00:22:06 crc kubenswrapper[5110]: I0126 00:22:06.746466 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk\" (UID: \"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk" Jan 26 00:22:06 crc kubenswrapper[5110]: I0126 00:22:06.747059 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk\" (UID: \"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk" Jan 26 00:22:06 crc kubenswrapper[5110]: I0126 00:22:06.747308 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk\" (UID: \"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk" Jan 26 00:22:06 crc kubenswrapper[5110]: I0126 00:22:06.771005 5110 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lsdtk\" (UniqueName: \"kubernetes.io/projected/d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc-kube-api-access-lsdtk\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk\" (UID: \"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk" Jan 26 00:22:06 crc kubenswrapper[5110]: I0126 00:22:06.885077 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk" Jan 26 00:22:07 crc kubenswrapper[5110]: I0126 00:22:07.037929 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh"] Jan 26 00:22:07 crc kubenswrapper[5110]: W0126 00:22:07.061347 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2711e85a_af93_43d8_8e4a_d6b92be4f574.slice/crio-a8bd0f526a47e6e7c564ca981ad6d6a43ceeb81957a770f5ded8f54aee053b76 WatchSource:0}: Error finding container a8bd0f526a47e6e7c564ca981ad6d6a43ceeb81957a770f5ded8f54aee053b76: Status 404 returned error can't find the container with id a8bd0f526a47e6e7c564ca981ad6d6a43ceeb81957a770f5ded8f54aee053b76 Jan 26 00:22:07 crc kubenswrapper[5110]: I0126 00:22:07.465774 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk"] Jan 26 00:22:07 crc kubenswrapper[5110]: I0126 00:22:07.556296 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" event={"ID":"2711e85a-af93-43d8-8e4a-d6b92be4f574","Type":"ContainerStarted","Data":"dd774e708b41450f216785bacf30da2385c0b5a1511f45be1b4ba760838a47c1"} Jan 26 00:22:07 crc kubenswrapper[5110]: I0126 00:22:07.556362 
5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" event={"ID":"2711e85a-af93-43d8-8e4a-d6b92be4f574","Type":"ContainerStarted","Data":"a8bd0f526a47e6e7c564ca981ad6d6a43ceeb81957a770f5ded8f54aee053b76"} Jan 26 00:22:07 crc kubenswrapper[5110]: I0126 00:22:07.567986 5110 generic.go:358] "Generic (PLEG): container finished" podID="582e457d-8bb7-4c07-96f7-bbc77fce72cb" containerID="d32f1d10fdcc1a158c14127f974071a5a4e2c0d177cbdaa9b52be9f550c7da45" exitCode=0 Jan 26 00:22:07 crc kubenswrapper[5110]: I0126 00:22:07.568121 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkjxb" event={"ID":"582e457d-8bb7-4c07-96f7-bbc77fce72cb","Type":"ContainerDied","Data":"d32f1d10fdcc1a158c14127f974071a5a4e2c0d177cbdaa9b52be9f550c7da45"} Jan 26 00:22:07 crc kubenswrapper[5110]: I0126 00:22:07.578123 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk" event={"ID":"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc","Type":"ContainerStarted","Data":"f1afe6b5c2ba464c58fcfc1303996364b141bc6cd78e5631b8bdb0fba477276a"} Jan 26 00:22:08 crc kubenswrapper[5110]: I0126 00:22:08.585017 5110 generic.go:358] "Generic (PLEG): container finished" podID="2711e85a-af93-43d8-8e4a-d6b92be4f574" containerID="dd774e708b41450f216785bacf30da2385c0b5a1511f45be1b4ba760838a47c1" exitCode=0 Jan 26 00:22:08 crc kubenswrapper[5110]: I0126 00:22:08.585559 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" event={"ID":"2711e85a-af93-43d8-8e4a-d6b92be4f574","Type":"ContainerDied","Data":"dd774e708b41450f216785bacf30da2385c0b5a1511f45be1b4ba760838a47c1"} Jan 26 00:22:08 crc kubenswrapper[5110]: I0126 00:22:08.588569 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-bkjxb" event={"ID":"582e457d-8bb7-4c07-96f7-bbc77fce72cb","Type":"ContainerStarted","Data":"11d6e5492d73efd37b74ebb12531471d0f8f3611f651fa08270b5fb1b5809891"} Jan 26 00:22:08 crc kubenswrapper[5110]: I0126 00:22:08.590212 5110 generic.go:358] "Generic (PLEG): container finished" podID="d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc" containerID="3a445dfe50a14f5a76d98756772c5c08f82d677160ea9c69c441b38ddceba5a6" exitCode=0 Jan 26 00:22:08 crc kubenswrapper[5110]: I0126 00:22:08.590277 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk" event={"ID":"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc","Type":"ContainerDied","Data":"3a445dfe50a14f5a76d98756772c5c08f82d677160ea9c69c441b38ddceba5a6"} Jan 26 00:22:08 crc kubenswrapper[5110]: I0126 00:22:08.629237 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bkjxb" podStartSLOduration=6.983083638 podStartE2EDuration="7.629215354s" podCreationTimestamp="2026-01-26 00:22:01 +0000 UTC" firstStartedPulling="2026-01-26 00:22:03.457520095 +0000 UTC m=+780.686418714" lastFinishedPulling="2026-01-26 00:22:04.103651781 +0000 UTC m=+781.332550430" observedRunningTime="2026-01-26 00:22:08.625693593 +0000 UTC m=+785.854592202" watchObservedRunningTime="2026-01-26 00:22:08.629215354 +0000 UTC m=+785.858113963" Jan 26 00:22:09 crc kubenswrapper[5110]: I0126 00:22:09.605589 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk" event={"ID":"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc","Type":"ContainerStarted","Data":"8a5d3c6f6da39dc23bc3eef14e1184c6400bf06e29a0b1d34e0be104b1794ace"} Jan 26 00:22:10 crc kubenswrapper[5110]: I0126 00:22:10.634953 5110 generic.go:358] "Generic (PLEG): container finished" podID="d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc" 
containerID="8a5d3c6f6da39dc23bc3eef14e1184c6400bf06e29a0b1d34e0be104b1794ace" exitCode=0 Jan 26 00:22:10 crc kubenswrapper[5110]: I0126 00:22:10.635036 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk" event={"ID":"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc","Type":"ContainerDied","Data":"8a5d3c6f6da39dc23bc3eef14e1184c6400bf06e29a0b1d34e0be104b1794ace"} Jan 26 00:22:10 crc kubenswrapper[5110]: I0126 00:22:10.637837 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" event={"ID":"2711e85a-af93-43d8-8e4a-d6b92be4f574","Type":"ContainerStarted","Data":"85ee5fe7b92ecb0f3ac4940090d71536ba5a7de16ec91bbd7b61392f9ca41438"} Jan 26 00:22:11 crc kubenswrapper[5110]: I0126 00:22:11.369307 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-prld8"] Jan 26 00:22:11 crc kubenswrapper[5110]: I0126 00:22:11.375649 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prld8"] Jan 26 00:22:11 crc kubenswrapper[5110]: I0126 00:22:11.375786 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-prld8" Jan 26 00:22:11 crc kubenswrapper[5110]: I0126 00:22:11.453077 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b81e6f80-8f8a-4c31-a8aa-7f52a0e09541-catalog-content\") pod \"certified-operators-prld8\" (UID: \"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541\") " pod="openshift-marketplace/certified-operators-prld8" Jan 26 00:22:11 crc kubenswrapper[5110]: I0126 00:22:11.453192 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdsdw\" (UniqueName: \"kubernetes.io/projected/b81e6f80-8f8a-4c31-a8aa-7f52a0e09541-kube-api-access-gdsdw\") pod \"certified-operators-prld8\" (UID: \"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541\") " pod="openshift-marketplace/certified-operators-prld8" Jan 26 00:22:11 crc kubenswrapper[5110]: I0126 00:22:11.453254 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b81e6f80-8f8a-4c31-a8aa-7f52a0e09541-utilities\") pod \"certified-operators-prld8\" (UID: \"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541\") " pod="openshift-marketplace/certified-operators-prld8" Jan 26 00:22:11 crc kubenswrapper[5110]: I0126 00:22:11.554505 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b81e6f80-8f8a-4c31-a8aa-7f52a0e09541-catalog-content\") pod \"certified-operators-prld8\" (UID: \"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541\") " pod="openshift-marketplace/certified-operators-prld8" Jan 26 00:22:11 crc kubenswrapper[5110]: I0126 00:22:11.554575 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gdsdw\" (UniqueName: \"kubernetes.io/projected/b81e6f80-8f8a-4c31-a8aa-7f52a0e09541-kube-api-access-gdsdw\") pod 
\"certified-operators-prld8\" (UID: \"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541\") " pod="openshift-marketplace/certified-operators-prld8" Jan 26 00:22:11 crc kubenswrapper[5110]: I0126 00:22:11.554611 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b81e6f80-8f8a-4c31-a8aa-7f52a0e09541-utilities\") pod \"certified-operators-prld8\" (UID: \"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541\") " pod="openshift-marketplace/certified-operators-prld8" Jan 26 00:22:11 crc kubenswrapper[5110]: I0126 00:22:11.555260 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b81e6f80-8f8a-4c31-a8aa-7f52a0e09541-utilities\") pod \"certified-operators-prld8\" (UID: \"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541\") " pod="openshift-marketplace/certified-operators-prld8" Jan 26 00:22:11 crc kubenswrapper[5110]: I0126 00:22:11.555498 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b81e6f80-8f8a-4c31-a8aa-7f52a0e09541-catalog-content\") pod \"certified-operators-prld8\" (UID: \"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541\") " pod="openshift-marketplace/certified-operators-prld8" Jan 26 00:22:11 crc kubenswrapper[5110]: I0126 00:22:11.576576 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdsdw\" (UniqueName: \"kubernetes.io/projected/b81e6f80-8f8a-4c31-a8aa-7f52a0e09541-kube-api-access-gdsdw\") pod \"certified-operators-prld8\" (UID: \"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541\") " pod="openshift-marketplace/certified-operators-prld8" Jan 26 00:22:11 crc kubenswrapper[5110]: I0126 00:22:11.644707 5110 generic.go:358] "Generic (PLEG): container finished" podID="2711e85a-af93-43d8-8e4a-d6b92be4f574" containerID="85ee5fe7b92ecb0f3ac4940090d71536ba5a7de16ec91bbd7b61392f9ca41438" exitCode=0 Jan 26 00:22:11 crc kubenswrapper[5110]: I0126 
00:22:11.644824 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" event={"ID":"2711e85a-af93-43d8-8e4a-d6b92be4f574","Type":"ContainerDied","Data":"85ee5fe7b92ecb0f3ac4940090d71536ba5a7de16ec91bbd7b61392f9ca41438"} Jan 26 00:22:11 crc kubenswrapper[5110]: I0126 00:22:11.649990 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk" event={"ID":"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc","Type":"ContainerStarted","Data":"b6b925210186f8d18a57d035e6a53fee07b9136e04cc76fb96dba6ce93c2254b"} Jan 26 00:22:11 crc kubenswrapper[5110]: I0126 00:22:11.713501 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prld8" Jan 26 00:22:12 crc kubenswrapper[5110]: I0126 00:22:12.005652 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk" podStartSLOduration=5.212834075 podStartE2EDuration="6.005629461s" podCreationTimestamp="2026-01-26 00:22:06 +0000 UTC" firstStartedPulling="2026-01-26 00:22:08.591780142 +0000 UTC m=+785.820678761" lastFinishedPulling="2026-01-26 00:22:09.384575538 +0000 UTC m=+786.613474147" observedRunningTime="2026-01-26 00:22:11.999839455 +0000 UTC m=+789.228738074" watchObservedRunningTime="2026-01-26 00:22:12.005629461 +0000 UTC m=+789.234528070" Jan 26 00:22:12 crc kubenswrapper[5110]: I0126 00:22:12.029095 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bkjxb" Jan 26 00:22:12 crc kubenswrapper[5110]: I0126 00:22:12.029145 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-bkjxb" Jan 26 00:22:12 crc kubenswrapper[5110]: I0126 00:22:12.373451 5110 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prld8"] Jan 26 00:22:12 crc kubenswrapper[5110]: W0126 00:22:12.395982 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb81e6f80_8f8a_4c31_a8aa_7f52a0e09541.slice/crio-9ec6cbbb82cb529affbb4c40475e6485932d6d5cf57db7dbbf75ab8af361c2fc WatchSource:0}: Error finding container 9ec6cbbb82cb529affbb4c40475e6485932d6d5cf57db7dbbf75ab8af361c2fc: Status 404 returned error can't find the container with id 9ec6cbbb82cb529affbb4c40475e6485932d6d5cf57db7dbbf75ab8af361c2fc Jan 26 00:22:12 crc kubenswrapper[5110]: I0126 00:22:12.664110 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" event={"ID":"2711e85a-af93-43d8-8e4a-d6b92be4f574","Type":"ContainerStarted","Data":"0ed3196b18327e3e706b8dfedae97894438e74afe3eee40045a6df9c60aa83c1"} Jan 26 00:22:12 crc kubenswrapper[5110]: I0126 00:22:12.668322 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prld8" event={"ID":"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541","Type":"ContainerStarted","Data":"9ec6cbbb82cb529affbb4c40475e6485932d6d5cf57db7dbbf75ab8af361c2fc"} Jan 26 00:22:12 crc kubenswrapper[5110]: I0126 00:22:12.670656 5110 generic.go:358] "Generic (PLEG): container finished" podID="d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc" containerID="b6b925210186f8d18a57d035e6a53fee07b9136e04cc76fb96dba6ce93c2254b" exitCode=0 Jan 26 00:22:12 crc kubenswrapper[5110]: I0126 00:22:12.670875 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk" event={"ID":"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc","Type":"ContainerDied","Data":"b6b925210186f8d18a57d035e6a53fee07b9136e04cc76fb96dba6ce93c2254b"} Jan 26 00:22:12 crc kubenswrapper[5110]: I0126 
00:22:12.702642 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" podStartSLOduration=6.659080665 podStartE2EDuration="7.702621483s" podCreationTimestamp="2026-01-26 00:22:05 +0000 UTC" firstStartedPulling="2026-01-26 00:22:08.587560861 +0000 UTC m=+785.816459480" lastFinishedPulling="2026-01-26 00:22:09.631101689 +0000 UTC m=+786.860000298" observedRunningTime="2026-01-26 00:22:12.697228079 +0000 UTC m=+789.926126698" watchObservedRunningTime="2026-01-26 00:22:12.702621483 +0000 UTC m=+789.931520092" Jan 26 00:22:13 crc kubenswrapper[5110]: I0126 00:22:13.116472 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bkjxb" podUID="582e457d-8bb7-4c07-96f7-bbc77fce72cb" containerName="registry-server" probeResult="failure" output=< Jan 26 00:22:13 crc kubenswrapper[5110]: timeout: failed to connect service ":50051" within 1s Jan 26 00:22:13 crc kubenswrapper[5110]: > Jan 26 00:22:13 crc kubenswrapper[5110]: I0126 00:22:13.702296 5110 generic.go:358] "Generic (PLEG): container finished" podID="2711e85a-af93-43d8-8e4a-d6b92be4f574" containerID="0ed3196b18327e3e706b8dfedae97894438e74afe3eee40045a6df9c60aa83c1" exitCode=0 Jan 26 00:22:13 crc kubenswrapper[5110]: I0126 00:22:13.702511 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" event={"ID":"2711e85a-af93-43d8-8e4a-d6b92be4f574","Type":"ContainerDied","Data":"0ed3196b18327e3e706b8dfedae97894438e74afe3eee40045a6df9c60aa83c1"} Jan 26 00:22:13 crc kubenswrapper[5110]: I0126 00:22:13.707754 5110 generic.go:358] "Generic (PLEG): container finished" podID="b81e6f80-8f8a-4c31-a8aa-7f52a0e09541" containerID="84f9b0ad682a55287ac9cd5bb62941f419db120b9f7ed10c92105b23e9700edb" exitCode=0 Jan 26 00:22:13 crc kubenswrapper[5110]: I0126 00:22:13.709397 5110 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prld8" event={"ID":"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541","Type":"ContainerDied","Data":"84f9b0ad682a55287ac9cd5bb62941f419db120b9f7ed10c92105b23e9700edb"} Jan 26 00:22:14 crc kubenswrapper[5110]: I0126 00:22:14.487731 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z"] Jan 26 00:22:14 crc kubenswrapper[5110]: I0126 00:22:14.497844 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z" Jan 26 00:22:14 crc kubenswrapper[5110]: I0126 00:22:14.601805 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/857dd7a1-3d35-47a1-b2fa-5bcee0265262-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z\" (UID: \"857dd7a1-3d35-47a1-b2fa-5bcee0265262\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z" Jan 26 00:22:14 crc kubenswrapper[5110]: I0126 00:22:14.601923 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/857dd7a1-3d35-47a1-b2fa-5bcee0265262-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z\" (UID: \"857dd7a1-3d35-47a1-b2fa-5bcee0265262\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z" Jan 26 00:22:14 crc kubenswrapper[5110]: I0126 00:22:14.601962 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfc26\" (UniqueName: \"kubernetes.io/projected/857dd7a1-3d35-47a1-b2fa-5bcee0265262-kube-api-access-pfc26\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z\" (UID: 
\"857dd7a1-3d35-47a1-b2fa-5bcee0265262\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z" Jan 26 00:22:14 crc kubenswrapper[5110]: I0126 00:22:14.703028 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/857dd7a1-3d35-47a1-b2fa-5bcee0265262-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z\" (UID: \"857dd7a1-3d35-47a1-b2fa-5bcee0265262\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z" Jan 26 00:22:14 crc kubenswrapper[5110]: I0126 00:22:14.703539 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pfc26\" (UniqueName: \"kubernetes.io/projected/857dd7a1-3d35-47a1-b2fa-5bcee0265262-kube-api-access-pfc26\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z\" (UID: \"857dd7a1-3d35-47a1-b2fa-5bcee0265262\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z" Jan 26 00:22:14 crc kubenswrapper[5110]: I0126 00:22:14.703605 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/857dd7a1-3d35-47a1-b2fa-5bcee0265262-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z\" (UID: \"857dd7a1-3d35-47a1-b2fa-5bcee0265262\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z" Jan 26 00:22:14 crc kubenswrapper[5110]: I0126 00:22:14.703805 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/857dd7a1-3d35-47a1-b2fa-5bcee0265262-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z\" (UID: \"857dd7a1-3d35-47a1-b2fa-5bcee0265262\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z" Jan 26 00:22:14 crc 
kubenswrapper[5110]: I0126 00:22:14.704087 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/857dd7a1-3d35-47a1-b2fa-5bcee0265262-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z\" (UID: \"857dd7a1-3d35-47a1-b2fa-5bcee0265262\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z" Jan 26 00:22:14 crc kubenswrapper[5110]: I0126 00:22:14.720737 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk" event={"ID":"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc","Type":"ContainerDied","Data":"f1afe6b5c2ba464c58fcfc1303996364b141bc6cd78e5631b8bdb0fba477276a"} Jan 26 00:22:14 crc kubenswrapper[5110]: I0126 00:22:14.720815 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1afe6b5c2ba464c58fcfc1303996364b141bc6cd78e5631b8bdb0fba477276a" Jan 26 00:22:14 crc kubenswrapper[5110]: I0126 00:22:14.874705 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk" Jan 26 00:22:14 crc kubenswrapper[5110]: I0126 00:22:14.876114 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfc26\" (UniqueName: \"kubernetes.io/projected/857dd7a1-3d35-47a1-b2fa-5bcee0265262-kube-api-access-pfc26\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z\" (UID: \"857dd7a1-3d35-47a1-b2fa-5bcee0265262\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z" Jan 26 00:22:14 crc kubenswrapper[5110]: I0126 00:22:14.904688 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsdtk\" (UniqueName: \"kubernetes.io/projected/d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc-kube-api-access-lsdtk\") pod \"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc\" (UID: \"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc\") " Jan 26 00:22:14 crc kubenswrapper[5110]: I0126 00:22:14.904779 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc-util\") pod \"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc\" (UID: \"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc\") " Jan 26 00:22:14 crc kubenswrapper[5110]: I0126 00:22:14.904887 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc-bundle\") pod \"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc\" (UID: \"d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc\") " Jan 26 00:22:14 crc kubenswrapper[5110]: I0126 00:22:14.906739 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc-bundle" (OuterVolumeSpecName: "bundle") pod "d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc" (UID: "d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:22:14 crc kubenswrapper[5110]: I0126 00:22:14.973034 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z"] Jan 26 00:22:15 crc kubenswrapper[5110]: I0126 00:22:15.002086 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc-util" (OuterVolumeSpecName: "util") pod "d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc" (UID: "d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:22:15 crc kubenswrapper[5110]: I0126 00:22:15.004045 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc-kube-api-access-lsdtk" (OuterVolumeSpecName: "kube-api-access-lsdtk") pod "d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc" (UID: "d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc"). InnerVolumeSpecName "kube-api-access-lsdtk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:22:15 crc kubenswrapper[5110]: I0126 00:22:15.010493 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z" Jan 26 00:22:15 crc kubenswrapper[5110]: I0126 00:22:15.012037 5110 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc-util\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:15 crc kubenswrapper[5110]: I0126 00:22:15.012154 5110 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:15 crc kubenswrapper[5110]: I0126 00:22:15.012268 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lsdtk\" (UniqueName: \"kubernetes.io/projected/d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc-kube-api-access-lsdtk\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:16 crc kubenswrapper[5110]: I0126 00:22:16.274012 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk" Jan 26 00:22:16 crc kubenswrapper[5110]: I0126 00:22:16.276099 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prld8" event={"ID":"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541","Type":"ContainerStarted","Data":"bebb72dd87d82e8372986791668a36e18767fc6d6a92a3531780f351c92551c7"} Jan 26 00:22:16 crc kubenswrapper[5110]: I0126 00:22:16.539931 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" Jan 26 00:22:16 crc kubenswrapper[5110]: I0126 00:22:16.592328 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2711e85a-af93-43d8-8e4a-d6b92be4f574-bundle\") pod \"2711e85a-af93-43d8-8e4a-d6b92be4f574\" (UID: \"2711e85a-af93-43d8-8e4a-d6b92be4f574\") " Jan 26 00:22:16 crc kubenswrapper[5110]: I0126 00:22:16.592458 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2711e85a-af93-43d8-8e4a-d6b92be4f574-util\") pod \"2711e85a-af93-43d8-8e4a-d6b92be4f574\" (UID: \"2711e85a-af93-43d8-8e4a-d6b92be4f574\") " Jan 26 00:22:16 crc kubenswrapper[5110]: I0126 00:22:16.592499 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqrsh\" (UniqueName: \"kubernetes.io/projected/2711e85a-af93-43d8-8e4a-d6b92be4f574-kube-api-access-wqrsh\") pod \"2711e85a-af93-43d8-8e4a-d6b92be4f574\" (UID: \"2711e85a-af93-43d8-8e4a-d6b92be4f574\") " Jan 26 00:22:16 crc kubenswrapper[5110]: I0126 00:22:16.605189 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2711e85a-af93-43d8-8e4a-d6b92be4f574-kube-api-access-wqrsh" (OuterVolumeSpecName: "kube-api-access-wqrsh") pod "2711e85a-af93-43d8-8e4a-d6b92be4f574" (UID: "2711e85a-af93-43d8-8e4a-d6b92be4f574"). InnerVolumeSpecName "kube-api-access-wqrsh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:22:16 crc kubenswrapper[5110]: I0126 00:22:16.615183 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2711e85a-af93-43d8-8e4a-d6b92be4f574-util" (OuterVolumeSpecName: "util") pod "2711e85a-af93-43d8-8e4a-d6b92be4f574" (UID: "2711e85a-af93-43d8-8e4a-d6b92be4f574"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:22:16 crc kubenswrapper[5110]: I0126 00:22:16.681371 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z"] Jan 26 00:22:16 crc kubenswrapper[5110]: I0126 00:22:16.693953 5110 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2711e85a-af93-43d8-8e4a-d6b92be4f574-util\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:16 crc kubenswrapper[5110]: I0126 00:22:16.693996 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wqrsh\" (UniqueName: \"kubernetes.io/projected/2711e85a-af93-43d8-8e4a-d6b92be4f574-kube-api-access-wqrsh\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:16 crc kubenswrapper[5110]: I0126 00:22:16.726062 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2711e85a-af93-43d8-8e4a-d6b92be4f574-bundle" (OuterVolumeSpecName: "bundle") pod "2711e85a-af93-43d8-8e4a-d6b92be4f574" (UID: "2711e85a-af93-43d8-8e4a-d6b92be4f574"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:22:16 crc kubenswrapper[5110]: I0126 00:22:16.794668 5110 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2711e85a-af93-43d8-8e4a-d6b92be4f574-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:17 crc kubenswrapper[5110]: I0126 00:22:17.281573 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" event={"ID":"2711e85a-af93-43d8-8e4a-d6b92be4f574","Type":"ContainerDied","Data":"a8bd0f526a47e6e7c564ca981ad6d6a43ceeb81957a770f5ded8f54aee053b76"} Jan 26 00:22:17 crc kubenswrapper[5110]: I0126 00:22:17.281977 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8bd0f526a47e6e7c564ca981ad6d6a43ceeb81957a770f5ded8f54aee053b76" Jan 26 00:22:17 crc kubenswrapper[5110]: I0126 00:22:17.282090 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh" Jan 26 00:22:17 crc kubenswrapper[5110]: I0126 00:22:17.287389 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z" event={"ID":"857dd7a1-3d35-47a1-b2fa-5bcee0265262","Type":"ContainerStarted","Data":"1e7f4b88ec089e27aa5c2864e25c1d38980d7584a407b38790484026a9fa2b95"} Jan 26 00:22:18 crc kubenswrapper[5110]: I0126 00:22:18.294747 5110 generic.go:358] "Generic (PLEG): container finished" podID="b81e6f80-8f8a-4c31-a8aa-7f52a0e09541" containerID="bebb72dd87d82e8372986791668a36e18767fc6d6a92a3531780f351c92551c7" exitCode=0 Jan 26 00:22:18 crc kubenswrapper[5110]: I0126 00:22:18.294857 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prld8" event={"ID":"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541","Type":"ContainerDied","Data":"bebb72dd87d82e8372986791668a36e18767fc6d6a92a3531780f351c92551c7"} Jan 26 00:22:18 crc kubenswrapper[5110]: I0126 00:22:18.297907 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z" event={"ID":"857dd7a1-3d35-47a1-b2fa-5bcee0265262","Type":"ContainerStarted","Data":"d850b10d1fe68b807cc454a4d594b228a1b2ccea8227a382c2ad1249f2faee2c"} Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.220894 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-hjwfj"] Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.221762 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc" containerName="util" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.221775 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc" containerName="util" Jan 26 
00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.221785 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc" containerName="extract" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.221805 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc" containerName="extract" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.221819 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2711e85a-af93-43d8-8e4a-d6b92be4f574" containerName="util" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.221825 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="2711e85a-af93-43d8-8e4a-d6b92be4f574" containerName="util" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.221837 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2711e85a-af93-43d8-8e4a-d6b92be4f574" containerName="pull" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.221843 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="2711e85a-af93-43d8-8e4a-d6b92be4f574" containerName="pull" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.221850 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2711e85a-af93-43d8-8e4a-d6b92be4f574" containerName="extract" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.221856 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="2711e85a-af93-43d8-8e4a-d6b92be4f574" containerName="extract" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.221866 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc" containerName="pull" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.221871 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc" containerName="pull" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 
00:22:19.221968 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="2711e85a-af93-43d8-8e4a-d6b92be4f574" containerName="extract" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.221977 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc" containerName="extract" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.296240 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-hjwfj"] Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.296287 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-5sc47"] Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.300814 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-7m2dg"] Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.302046 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-5sc47" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.302058 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-hjwfj" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.304525 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-5sc47"] Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.304673 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-7m2dg" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.305932 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-pfsqb\"" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.308761 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.313260 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-spckv\"" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.314568 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prld8" event={"ID":"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541","Type":"ContainerStarted","Data":"8304b9b2f3a2c944434689fa22e6179b11aa03891644a5ffd7d6b6ffdda9726e"} Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.316715 5110 generic.go:358] "Generic (PLEG): container finished" podID="857dd7a1-3d35-47a1-b2fa-5bcee0265262" containerID="d850b10d1fe68b807cc454a4d594b228a1b2ccea8227a382c2ad1249f2faee2c" exitCode=0 Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.316844 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.317079 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z" event={"ID":"857dd7a1-3d35-47a1-b2fa-5bcee0265262","Type":"ContainerDied","Data":"d850b10d1fe68b807cc454a4d594b228a1b2ccea8227a382c2ad1249f2faee2c"} Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.317587 5110 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.332417 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-7m2dg"] Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.407599 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-prld8" podStartSLOduration=7.543599311 podStartE2EDuration="8.407583218s" podCreationTimestamp="2026-01-26 00:22:11 +0000 UTC" firstStartedPulling="2026-01-26 00:22:13.709234805 +0000 UTC m=+790.938133414" lastFinishedPulling="2026-01-26 00:22:14.573218712 +0000 UTC m=+791.802117321" observedRunningTime="2026-01-26 00:22:19.403702096 +0000 UTC m=+796.632600705" watchObservedRunningTime="2026-01-26 00:22:19.407583218 +0000 UTC m=+796.636481827" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.665642 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/85376b96-d97e-4b5e-9bb0-9a931610c0ec-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-898766f4b-5sc47\" (UID: \"85376b96-d97e-4b5e-9bb0-9a931610c0ec\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-5sc47" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.665698 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7cc0b3a9-bae1-46d7-974b-da9bd9c524e4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-898766f4b-7m2dg\" (UID: \"7cc0b3a9-bae1-46d7-974b-da9bd9c524e4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-7m2dg" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.665727 5110 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/85376b96-d97e-4b5e-9bb0-9a931610c0ec-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-898766f4b-5sc47\" (UID: \"85376b96-d97e-4b5e-9bb0-9a931610c0ec\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-5sc47" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.665838 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7cc0b3a9-bae1-46d7-974b-da9bd9c524e4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-898766f4b-7m2dg\" (UID: \"7cc0b3a9-bae1-46d7-974b-da9bd9c524e4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-7m2dg" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.665868 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlb9l\" (UniqueName: \"kubernetes.io/projected/47b88cf6-8ce3-4593-b450-4a0b6ac95908-kube-api-access-mlb9l\") pod \"obo-prometheus-operator-9bc85b4bf-hjwfj\" (UID: \"47b88cf6-8ce3-4593-b450-4a0b6ac95908\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-hjwfj" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.675587 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-jr7zr"] Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.683606 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-jr7zr" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.687539 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-gkt62\"" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.687847 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.692426 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-jr7zr"] Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.699567 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-5jr67"] Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.703695 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-5jr67" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.706736 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-g2g7h\"" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.715291 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-5jr67"] Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.767315 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/85376b96-d97e-4b5e-9bb0-9a931610c0ec-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-898766f4b-5sc47\" (UID: \"85376b96-d97e-4b5e-9bb0-9a931610c0ec\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-5sc47" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.767369 5110 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7cc0b3a9-bae1-46d7-974b-da9bd9c524e4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-898766f4b-7m2dg\" (UID: \"7cc0b3a9-bae1-46d7-974b-da9bd9c524e4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-7m2dg" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.767404 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/85376b96-d97e-4b5e-9bb0-9a931610c0ec-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-898766f4b-5sc47\" (UID: \"85376b96-d97e-4b5e-9bb0-9a931610c0ec\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-5sc47" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.767447 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnckl\" (UniqueName: \"kubernetes.io/projected/e5dbd874-cd23-4d81-92b1-7bc9c9109e2c-kube-api-access-gnckl\") pod \"perses-operator-669c9f96b5-5jr67\" (UID: \"e5dbd874-cd23-4d81-92b1-7bc9c9109e2c\") " pod="openshift-operators/perses-operator-669c9f96b5-5jr67" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.767491 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7cc0b3a9-bae1-46d7-974b-da9bd9c524e4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-898766f4b-7m2dg\" (UID: \"7cc0b3a9-bae1-46d7-974b-da9bd9c524e4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-7m2dg" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.767524 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mlb9l\" (UniqueName: \"kubernetes.io/projected/47b88cf6-8ce3-4593-b450-4a0b6ac95908-kube-api-access-mlb9l\") pod 
\"obo-prometheus-operator-9bc85b4bf-hjwfj\" (UID: \"47b88cf6-8ce3-4593-b450-4a0b6ac95908\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-hjwfj" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.768034 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/18496abe-bb01-4ce4-a79c-35bc522ec58d-observability-operator-tls\") pod \"observability-operator-85c68dddb-jr7zr\" (UID: \"18496abe-bb01-4ce4-a79c-35bc522ec58d\") " pod="openshift-operators/observability-operator-85c68dddb-jr7zr" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.768103 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqz8x\" (UniqueName: \"kubernetes.io/projected/18496abe-bb01-4ce4-a79c-35bc522ec58d-kube-api-access-lqz8x\") pod \"observability-operator-85c68dddb-jr7zr\" (UID: \"18496abe-bb01-4ce4-a79c-35bc522ec58d\") " pod="openshift-operators/observability-operator-85c68dddb-jr7zr" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.768218 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e5dbd874-cd23-4d81-92b1-7bc9c9109e2c-openshift-service-ca\") pod \"perses-operator-669c9f96b5-5jr67\" (UID: \"e5dbd874-cd23-4d81-92b1-7bc9c9109e2c\") " pod="openshift-operators/perses-operator-669c9f96b5-5jr67" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.777606 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7cc0b3a9-bae1-46d7-974b-da9bd9c524e4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-898766f4b-7m2dg\" (UID: \"7cc0b3a9-bae1-46d7-974b-da9bd9c524e4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-7m2dg" Jan 26 00:22:19 crc kubenswrapper[5110]: 
I0126 00:22:19.777669 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/85376b96-d97e-4b5e-9bb0-9a931610c0ec-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-898766f4b-5sc47\" (UID: \"85376b96-d97e-4b5e-9bb0-9a931610c0ec\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-5sc47" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.778419 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/85376b96-d97e-4b5e-9bb0-9a931610c0ec-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-898766f4b-5sc47\" (UID: \"85376b96-d97e-4b5e-9bb0-9a931610c0ec\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-5sc47" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.794708 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlb9l\" (UniqueName: \"kubernetes.io/projected/47b88cf6-8ce3-4593-b450-4a0b6ac95908-kube-api-access-mlb9l\") pod \"obo-prometheus-operator-9bc85b4bf-hjwfj\" (UID: \"47b88cf6-8ce3-4593-b450-4a0b6ac95908\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-hjwfj" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.798719 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7cc0b3a9-bae1-46d7-974b-da9bd9c524e4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-898766f4b-7m2dg\" (UID: \"7cc0b3a9-bae1-46d7-974b-da9bd9c524e4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-7m2dg" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.869812 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/18496abe-bb01-4ce4-a79c-35bc522ec58d-observability-operator-tls\") pod 
\"observability-operator-85c68dddb-jr7zr\" (UID: \"18496abe-bb01-4ce4-a79c-35bc522ec58d\") " pod="openshift-operators/observability-operator-85c68dddb-jr7zr" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.870222 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lqz8x\" (UniqueName: \"kubernetes.io/projected/18496abe-bb01-4ce4-a79c-35bc522ec58d-kube-api-access-lqz8x\") pod \"observability-operator-85c68dddb-jr7zr\" (UID: \"18496abe-bb01-4ce4-a79c-35bc522ec58d\") " pod="openshift-operators/observability-operator-85c68dddb-jr7zr" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.870388 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e5dbd874-cd23-4d81-92b1-7bc9c9109e2c-openshift-service-ca\") pod \"perses-operator-669c9f96b5-5jr67\" (UID: \"e5dbd874-cd23-4d81-92b1-7bc9c9109e2c\") " pod="openshift-operators/perses-operator-669c9f96b5-5jr67" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.870601 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gnckl\" (UniqueName: \"kubernetes.io/projected/e5dbd874-cd23-4d81-92b1-7bc9c9109e2c-kube-api-access-gnckl\") pod \"perses-operator-669c9f96b5-5jr67\" (UID: \"e5dbd874-cd23-4d81-92b1-7bc9c9109e2c\") " pod="openshift-operators/perses-operator-669c9f96b5-5jr67" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.871505 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e5dbd874-cd23-4d81-92b1-7bc9c9109e2c-openshift-service-ca\") pod \"perses-operator-669c9f96b5-5jr67\" (UID: \"e5dbd874-cd23-4d81-92b1-7bc9c9109e2c\") " pod="openshift-operators/perses-operator-669c9f96b5-5jr67" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.873923 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/18496abe-bb01-4ce4-a79c-35bc522ec58d-observability-operator-tls\") pod \"observability-operator-85c68dddb-jr7zr\" (UID: \"18496abe-bb01-4ce4-a79c-35bc522ec58d\") " pod="openshift-operators/observability-operator-85c68dddb-jr7zr" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.902743 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnckl\" (UniqueName: \"kubernetes.io/projected/e5dbd874-cd23-4d81-92b1-7bc9c9109e2c-kube-api-access-gnckl\") pod \"perses-operator-669c9f96b5-5jr67\" (UID: \"e5dbd874-cd23-4d81-92b1-7bc9c9109e2c\") " pod="openshift-operators/perses-operator-669c9f96b5-5jr67" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.902788 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqz8x\" (UniqueName: \"kubernetes.io/projected/18496abe-bb01-4ce4-a79c-35bc522ec58d-kube-api-access-lqz8x\") pod \"observability-operator-85c68dddb-jr7zr\" (UID: \"18496abe-bb01-4ce4-a79c-35bc522ec58d\") " pod="openshift-operators/observability-operator-85c68dddb-jr7zr" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.949174 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-5sc47" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.957758 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-hjwfj" Jan 26 00:22:19 crc kubenswrapper[5110]: I0126 00:22:19.974879 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-7m2dg" Jan 26 00:22:20 crc kubenswrapper[5110]: I0126 00:22:20.006042 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-jr7zr" Jan 26 00:22:20 crc kubenswrapper[5110]: I0126 00:22:20.028972 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-5jr67" Jan 26 00:22:21 crc kubenswrapper[5110]: I0126 00:22:21.675921 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-hjwfj"] Jan 26 00:22:21 crc kubenswrapper[5110]: I0126 00:22:21.719865 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-prld8" Jan 26 00:22:21 crc kubenswrapper[5110]: I0126 00:22:21.719919 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-prld8" Jan 26 00:22:21 crc kubenswrapper[5110]: I0126 00:22:21.758884 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-jr7zr"] Jan 26 00:22:21 crc kubenswrapper[5110]: W0126 00:22:21.802839 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18496abe_bb01_4ce4_a79c_35bc522ec58d.slice/crio-d318e7592deac99fb302a6470c77ad263d93cd44b8210d2bcbb156f6db1a4cef WatchSource:0}: Error finding container d318e7592deac99fb302a6470c77ad263d93cd44b8210d2bcbb156f6db1a4cef: Status 404 returned error can't find the container with id d318e7592deac99fb302a6470c77ad263d93cd44b8210d2bcbb156f6db1a4cef Jan 26 00:22:21 crc kubenswrapper[5110]: I0126 00:22:21.913903 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-7m2dg"] Jan 26 00:22:22 crc kubenswrapper[5110]: I0126 00:22:22.024276 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-5sc47"] Jan 26 
00:22:22 crc kubenswrapper[5110]: I0126 00:22:22.109063 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bkjxb" Jan 26 00:22:22 crc kubenswrapper[5110]: I0126 00:22:22.134305 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-5jr67"] Jan 26 00:22:22 crc kubenswrapper[5110]: I0126 00:22:22.205194 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bkjxb" Jan 26 00:22:22 crc kubenswrapper[5110]: I0126 00:22:22.356972 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-7m2dg" event={"ID":"7cc0b3a9-bae1-46d7-974b-da9bd9c524e4","Type":"ContainerStarted","Data":"3f84ec625517314c2fe7ffd6242f7dc831e49626fe95e57bff19c0b357301425"} Jan 26 00:22:22 crc kubenswrapper[5110]: I0126 00:22:22.358250 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-5sc47" event={"ID":"85376b96-d97e-4b5e-9bb0-9a931610c0ec","Type":"ContainerStarted","Data":"a37e8032d99c545169a4d6fb5f102496800b9d43bb4b54a5865ad03416b5663d"} Jan 26 00:22:22 crc kubenswrapper[5110]: I0126 00:22:22.359321 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-5jr67" event={"ID":"e5dbd874-cd23-4d81-92b1-7bc9c9109e2c","Type":"ContainerStarted","Data":"ddbc29b2dd286eb3dbe2d672904b6e6f43adfe5b592498ed07b7919b3ab2991d"} Jan 26 00:22:22 crc kubenswrapper[5110]: I0126 00:22:22.360451 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-hjwfj" event={"ID":"47b88cf6-8ce3-4593-b450-4a0b6ac95908","Type":"ContainerStarted","Data":"aca5a71d8de04bdd8fdd40e2c52937a697078e6e041d41c37aeb33c6976a27de"} Jan 26 00:22:22 crc kubenswrapper[5110]: I0126 00:22:22.361534 5110 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-jr7zr" event={"ID":"18496abe-bb01-4ce4-a79c-35bc522ec58d","Type":"ContainerStarted","Data":"d318e7592deac99fb302a6470c77ad263d93cd44b8210d2bcbb156f6db1a4cef"} Jan 26 00:22:22 crc kubenswrapper[5110]: I0126 00:22:22.811447 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-prld8" podUID="b81e6f80-8f8a-4c31-a8aa-7f52a0e09541" containerName="registry-server" probeResult="failure" output=< Jan 26 00:22:22 crc kubenswrapper[5110]: timeout: failed to connect service ":50051" within 1s Jan 26 00:22:22 crc kubenswrapper[5110]: > Jan 26 00:22:23 crc kubenswrapper[5110]: I0126 00:22:23.079545 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-59b9c7b5d-xkl57"] Jan 26 00:22:23 crc kubenswrapper[5110]: I0126 00:22:23.155169 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-59b9c7b5d-xkl57"] Jan 26 00:22:23 crc kubenswrapper[5110]: I0126 00:22:23.155449 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-59b9c7b5d-xkl57" Jan 26 00:22:23 crc kubenswrapper[5110]: I0126 00:22:23.157991 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Jan 26 00:22:23 crc kubenswrapper[5110]: I0126 00:22:23.159226 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Jan 26 00:22:23 crc kubenswrapper[5110]: I0126 00:22:23.165694 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Jan 26 00:22:23 crc kubenswrapper[5110]: I0126 00:22:23.165867 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-pth56\"" Jan 26 00:22:23 crc kubenswrapper[5110]: I0126 00:22:23.297632 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dae80e23-f54a-4949-bb26-29e9afccc40d-apiservice-cert\") pod \"elastic-operator-59b9c7b5d-xkl57\" (UID: \"dae80e23-f54a-4949-bb26-29e9afccc40d\") " pod="service-telemetry/elastic-operator-59b9c7b5d-xkl57" Jan 26 00:22:23 crc kubenswrapper[5110]: I0126 00:22:23.297704 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dae80e23-f54a-4949-bb26-29e9afccc40d-webhook-cert\") pod \"elastic-operator-59b9c7b5d-xkl57\" (UID: \"dae80e23-f54a-4949-bb26-29e9afccc40d\") " pod="service-telemetry/elastic-operator-59b9c7b5d-xkl57" Jan 26 00:22:23 crc kubenswrapper[5110]: I0126 00:22:23.297756 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b987b\" (UniqueName: \"kubernetes.io/projected/dae80e23-f54a-4949-bb26-29e9afccc40d-kube-api-access-b987b\") pod 
\"elastic-operator-59b9c7b5d-xkl57\" (UID: \"dae80e23-f54a-4949-bb26-29e9afccc40d\") " pod="service-telemetry/elastic-operator-59b9c7b5d-xkl57" Jan 26 00:22:23 crc kubenswrapper[5110]: I0126 00:22:23.408697 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dae80e23-f54a-4949-bb26-29e9afccc40d-apiservice-cert\") pod \"elastic-operator-59b9c7b5d-xkl57\" (UID: \"dae80e23-f54a-4949-bb26-29e9afccc40d\") " pod="service-telemetry/elastic-operator-59b9c7b5d-xkl57" Jan 26 00:22:23 crc kubenswrapper[5110]: I0126 00:22:23.408782 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dae80e23-f54a-4949-bb26-29e9afccc40d-webhook-cert\") pod \"elastic-operator-59b9c7b5d-xkl57\" (UID: \"dae80e23-f54a-4949-bb26-29e9afccc40d\") " pod="service-telemetry/elastic-operator-59b9c7b5d-xkl57" Jan 26 00:22:23 crc kubenswrapper[5110]: I0126 00:22:23.408833 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b987b\" (UniqueName: \"kubernetes.io/projected/dae80e23-f54a-4949-bb26-29e9afccc40d-kube-api-access-b987b\") pod \"elastic-operator-59b9c7b5d-xkl57\" (UID: \"dae80e23-f54a-4949-bb26-29e9afccc40d\") " pod="service-telemetry/elastic-operator-59b9c7b5d-xkl57" Jan 26 00:22:23 crc kubenswrapper[5110]: I0126 00:22:23.423503 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dae80e23-f54a-4949-bb26-29e9afccc40d-webhook-cert\") pod \"elastic-operator-59b9c7b5d-xkl57\" (UID: \"dae80e23-f54a-4949-bb26-29e9afccc40d\") " pod="service-telemetry/elastic-operator-59b9c7b5d-xkl57" Jan 26 00:22:23 crc kubenswrapper[5110]: I0126 00:22:23.429403 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/dae80e23-f54a-4949-bb26-29e9afccc40d-apiservice-cert\") pod \"elastic-operator-59b9c7b5d-xkl57\" (UID: \"dae80e23-f54a-4949-bb26-29e9afccc40d\") " pod="service-telemetry/elastic-operator-59b9c7b5d-xkl57"
Jan 26 00:22:23 crc kubenswrapper[5110]: I0126 00:22:23.435514 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b987b\" (UniqueName: \"kubernetes.io/projected/dae80e23-f54a-4949-bb26-29e9afccc40d-kube-api-access-b987b\") pod \"elastic-operator-59b9c7b5d-xkl57\" (UID: \"dae80e23-f54a-4949-bb26-29e9afccc40d\") " pod="service-telemetry/elastic-operator-59b9c7b5d-xkl57"
Jan 26 00:22:23 crc kubenswrapper[5110]: I0126 00:22:23.486852 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-59b9c7b5d-xkl57"
Jan 26 00:22:24 crc kubenswrapper[5110]: I0126 00:22:24.183413 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-59b9c7b5d-xkl57"]
Jan 26 00:22:24 crc kubenswrapper[5110]: W0126 00:22:24.213028 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddae80e23_f54a_4949_bb26_29e9afccc40d.slice/crio-398ee58ea5dd2774164a94a935885602ee6114b425aafc123e512449db8b1454 WatchSource:0}: Error finding container 398ee58ea5dd2774164a94a935885602ee6114b425aafc123e512449db8b1454: Status 404 returned error can't find the container with id 398ee58ea5dd2774164a94a935885602ee6114b425aafc123e512449db8b1454
Jan 26 00:22:24 crc kubenswrapper[5110]: I0126 00:22:24.411438 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-59b9c7b5d-xkl57" event={"ID":"dae80e23-f54a-4949-bb26-29e9afccc40d","Type":"ContainerStarted","Data":"398ee58ea5dd2774164a94a935885602ee6114b425aafc123e512449db8b1454"}
Jan 26 00:22:24 crc kubenswrapper[5110]: I0126 00:22:24.507301 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-gmzn5"]
Jan 26 00:22:24 crc kubenswrapper[5110]: I0126 00:22:24.584728 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-gmzn5"]
Jan 26 00:22:24 crc kubenswrapper[5110]: I0126 00:22:24.584888 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-gmzn5"
Jan 26 00:22:24 crc kubenswrapper[5110]: I0126 00:22:24.587254 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-w6gt8\""
Jan 26 00:22:24 crc kubenswrapper[5110]: I0126 00:22:24.770005 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbl2q\" (UniqueName: \"kubernetes.io/projected/1b06d07c-855b-463f-aba7-16ab95dba6bc-kube-api-access-lbl2q\") pod \"interconnect-operator-78b9bd8798-gmzn5\" (UID: \"1b06d07c-855b-463f-aba7-16ab95dba6bc\") " pod="service-telemetry/interconnect-operator-78b9bd8798-gmzn5"
Jan 26 00:22:24 crc kubenswrapper[5110]: I0126 00:22:24.871414 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lbl2q\" (UniqueName: \"kubernetes.io/projected/1b06d07c-855b-463f-aba7-16ab95dba6bc-kube-api-access-lbl2q\") pod \"interconnect-operator-78b9bd8798-gmzn5\" (UID: \"1b06d07c-855b-463f-aba7-16ab95dba6bc\") " pod="service-telemetry/interconnect-operator-78b9bd8798-gmzn5"
Jan 26 00:22:24 crc kubenswrapper[5110]: I0126 00:22:24.897143 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbl2q\" (UniqueName: \"kubernetes.io/projected/1b06d07c-855b-463f-aba7-16ab95dba6bc-kube-api-access-lbl2q\") pod \"interconnect-operator-78b9bd8798-gmzn5\" (UID: \"1b06d07c-855b-463f-aba7-16ab95dba6bc\") " pod="service-telemetry/interconnect-operator-78b9bd8798-gmzn5"
Jan 26 00:22:24 crc kubenswrapper[5110]: I0126 00:22:24.969838 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-gmzn5"
Jan 26 00:22:26 crc kubenswrapper[5110]: I0126 00:22:26.814937 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bkjxb"]
Jan 26 00:22:26 crc kubenswrapper[5110]: I0126 00:22:26.815909 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bkjxb" podUID="582e457d-8bb7-4c07-96f7-bbc77fce72cb" containerName="registry-server" containerID="cri-o://11d6e5492d73efd37b74ebb12531471d0f8f3611f651fa08270b5fb1b5809891" gracePeriod=2
Jan 26 00:22:27 crc kubenswrapper[5110]: I0126 00:22:27.474832 5110 generic.go:358] "Generic (PLEG): container finished" podID="582e457d-8bb7-4c07-96f7-bbc77fce72cb" containerID="11d6e5492d73efd37b74ebb12531471d0f8f3611f651fa08270b5fb1b5809891" exitCode=0
Jan 26 00:22:27 crc kubenswrapper[5110]: I0126 00:22:27.474911 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkjxb" event={"ID":"582e457d-8bb7-4c07-96f7-bbc77fce72cb","Type":"ContainerDied","Data":"11d6e5492d73efd37b74ebb12531471d0f8f3611f651fa08270b5fb1b5809891"}
Jan 26 00:22:31 crc kubenswrapper[5110]: I0126 00:22:31.777204 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-prld8"
Jan 26 00:22:31 crc kubenswrapper[5110]: I0126 00:22:31.823562 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-prld8"
Jan 26 00:22:32 crc kubenswrapper[5110]: E0126 00:22:32.113184 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11d6e5492d73efd37b74ebb12531471d0f8f3611f651fa08270b5fb1b5809891 is running failed: container process not found" containerID="11d6e5492d73efd37b74ebb12531471d0f8f3611f651fa08270b5fb1b5809891" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 00:22:32 crc kubenswrapper[5110]: E0126 00:22:32.118705 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11d6e5492d73efd37b74ebb12531471d0f8f3611f651fa08270b5fb1b5809891 is running failed: container process not found" containerID="11d6e5492d73efd37b74ebb12531471d0f8f3611f651fa08270b5fb1b5809891" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 00:22:32 crc kubenswrapper[5110]: E0126 00:22:32.119157 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11d6e5492d73efd37b74ebb12531471d0f8f3611f651fa08270b5fb1b5809891 is running failed: container process not found" containerID="11d6e5492d73efd37b74ebb12531471d0f8f3611f651fa08270b5fb1b5809891" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 00:22:32 crc kubenswrapper[5110]: E0126 00:22:32.119246 5110 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11d6e5492d73efd37b74ebb12531471d0f8f3611f651fa08270b5fb1b5809891 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-bkjxb" podUID="582e457d-8bb7-4c07-96f7-bbc77fce72cb" containerName="registry-server" probeResult="unknown"
Jan 26 00:22:36 crc kubenswrapper[5110]: I0126 00:22:36.482471 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prld8"]
Jan 26 00:22:36 crc kubenswrapper[5110]: I0126 00:22:36.483464 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-prld8" podUID="b81e6f80-8f8a-4c31-a8aa-7f52a0e09541" containerName="registry-server" containerID="cri-o://8304b9b2f3a2c944434689fa22e6179b11aa03891644a5ffd7d6b6ffdda9726e" gracePeriod=2
Jan 26 00:22:37 crc kubenswrapper[5110]: I0126 00:22:37.019614 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bkjxb"
Jan 26 00:22:37 crc kubenswrapper[5110]: I0126 00:22:37.057556 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v26wk\" (UniqueName: \"kubernetes.io/projected/582e457d-8bb7-4c07-96f7-bbc77fce72cb-kube-api-access-v26wk\") pod \"582e457d-8bb7-4c07-96f7-bbc77fce72cb\" (UID: \"582e457d-8bb7-4c07-96f7-bbc77fce72cb\") "
Jan 26 00:22:37 crc kubenswrapper[5110]: I0126 00:22:37.057688 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/582e457d-8bb7-4c07-96f7-bbc77fce72cb-utilities\") pod \"582e457d-8bb7-4c07-96f7-bbc77fce72cb\" (UID: \"582e457d-8bb7-4c07-96f7-bbc77fce72cb\") "
Jan 26 00:22:37 crc kubenswrapper[5110]: I0126 00:22:37.057751 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/582e457d-8bb7-4c07-96f7-bbc77fce72cb-catalog-content\") pod \"582e457d-8bb7-4c07-96f7-bbc77fce72cb\" (UID: \"582e457d-8bb7-4c07-96f7-bbc77fce72cb\") "
Jan 26 00:22:37 crc kubenswrapper[5110]: I0126 00:22:37.062921 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/582e457d-8bb7-4c07-96f7-bbc77fce72cb-utilities" (OuterVolumeSpecName: "utilities") pod "582e457d-8bb7-4c07-96f7-bbc77fce72cb" (UID: "582e457d-8bb7-4c07-96f7-bbc77fce72cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:22:37 crc kubenswrapper[5110]: I0126 00:22:37.067446 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582e457d-8bb7-4c07-96f7-bbc77fce72cb-kube-api-access-v26wk" (OuterVolumeSpecName: "kube-api-access-v26wk") pod "582e457d-8bb7-4c07-96f7-bbc77fce72cb" (UID: "582e457d-8bb7-4c07-96f7-bbc77fce72cb"). InnerVolumeSpecName "kube-api-access-v26wk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:22:37 crc kubenswrapper[5110]: I0126 00:22:37.161267 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v26wk\" (UniqueName: \"kubernetes.io/projected/582e457d-8bb7-4c07-96f7-bbc77fce72cb-kube-api-access-v26wk\") on node \"crc\" DevicePath \"\""
Jan 26 00:22:37 crc kubenswrapper[5110]: I0126 00:22:37.161311 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/582e457d-8bb7-4c07-96f7-bbc77fce72cb-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 00:22:37 crc kubenswrapper[5110]: I0126 00:22:37.188339 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/582e457d-8bb7-4c07-96f7-bbc77fce72cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "582e457d-8bb7-4c07-96f7-bbc77fce72cb" (UID: "582e457d-8bb7-4c07-96f7-bbc77fce72cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:22:37 crc kubenswrapper[5110]: I0126 00:22:37.273566 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/582e457d-8bb7-4c07-96f7-bbc77fce72cb-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 00:22:37 crc kubenswrapper[5110]: I0126 00:22:37.685734 5110 generic.go:358] "Generic (PLEG): container finished" podID="b81e6f80-8f8a-4c31-a8aa-7f52a0e09541" containerID="8304b9b2f3a2c944434689fa22e6179b11aa03891644a5ffd7d6b6ffdda9726e" exitCode=0
Jan 26 00:22:37 crc kubenswrapper[5110]: I0126 00:22:37.685976 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prld8" event={"ID":"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541","Type":"ContainerDied","Data":"8304b9b2f3a2c944434689fa22e6179b11aa03891644a5ffd7d6b6ffdda9726e"}
Jan 26 00:22:37 crc kubenswrapper[5110]: I0126 00:22:37.691226 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bkjxb"
Jan 26 00:22:37 crc kubenswrapper[5110]: I0126 00:22:37.691209 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkjxb" event={"ID":"582e457d-8bb7-4c07-96f7-bbc77fce72cb","Type":"ContainerDied","Data":"f52c1acd1ef49b9c342a9af7f7236fde2f1b4eb7fc2e757ef4bd73acf591b25a"}
Jan 26 00:22:37 crc kubenswrapper[5110]: I0126 00:22:37.691376 5110 scope.go:117] "RemoveContainer" containerID="11d6e5492d73efd37b74ebb12531471d0f8f3611f651fa08270b5fb1b5809891"
Jan 26 00:22:37 crc kubenswrapper[5110]: I0126 00:22:37.725519 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bkjxb"]
Jan 26 00:22:37 crc kubenswrapper[5110]: I0126 00:22:37.733912 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bkjxb"]
Jan 26 00:22:39 crc kubenswrapper[5110]: I0126 00:22:39.326498 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="582e457d-8bb7-4c07-96f7-bbc77fce72cb" path="/var/lib/kubelet/pods/582e457d-8bb7-4c07-96f7-bbc77fce72cb/volumes"
Jan 26 00:22:41 crc kubenswrapper[5110]: E0126 00:22:41.780668 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8304b9b2f3a2c944434689fa22e6179b11aa03891644a5ffd7d6b6ffdda9726e is running failed: container process not found" containerID="8304b9b2f3a2c944434689fa22e6179b11aa03891644a5ffd7d6b6ffdda9726e" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 00:22:41 crc kubenswrapper[5110]: E0126 00:22:41.782596 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8304b9b2f3a2c944434689fa22e6179b11aa03891644a5ffd7d6b6ffdda9726e is running failed: container process not found" containerID="8304b9b2f3a2c944434689fa22e6179b11aa03891644a5ffd7d6b6ffdda9726e" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 00:22:41 crc kubenswrapper[5110]: E0126 00:22:41.782890 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8304b9b2f3a2c944434689fa22e6179b11aa03891644a5ffd7d6b6ffdda9726e is running failed: container process not found" containerID="8304b9b2f3a2c944434689fa22e6179b11aa03891644a5ffd7d6b6ffdda9726e" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 00:22:41 crc kubenswrapper[5110]: E0126 00:22:41.782924 5110 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8304b9b2f3a2c944434689fa22e6179b11aa03891644a5ffd7d6b6ffdda9726e is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-prld8" podUID="b81e6f80-8f8a-4c31-a8aa-7f52a0e09541" containerName="registry-server" probeResult="unknown"
Jan 26 00:22:44 crc kubenswrapper[5110]: I0126 00:22:44.746393 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prld8"
Jan 26 00:22:44 crc kubenswrapper[5110]: I0126 00:22:44.800390 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdsdw\" (UniqueName: \"kubernetes.io/projected/b81e6f80-8f8a-4c31-a8aa-7f52a0e09541-kube-api-access-gdsdw\") pod \"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541\" (UID: \"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541\") "
Jan 26 00:22:44 crc kubenswrapper[5110]: I0126 00:22:44.807042 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b81e6f80-8f8a-4c31-a8aa-7f52a0e09541-utilities\") pod \"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541\" (UID: \"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541\") "
Jan 26 00:22:44 crc kubenswrapper[5110]: I0126 00:22:44.808712 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b81e6f80-8f8a-4c31-a8aa-7f52a0e09541-catalog-content\") pod \"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541\" (UID: \"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541\") "
Jan 26 00:22:44 crc kubenswrapper[5110]: I0126 00:22:44.808783 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b81e6f80-8f8a-4c31-a8aa-7f52a0e09541-utilities" (OuterVolumeSpecName: "utilities") pod "b81e6f80-8f8a-4c31-a8aa-7f52a0e09541" (UID: "b81e6f80-8f8a-4c31-a8aa-7f52a0e09541"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:22:44 crc kubenswrapper[5110]: I0126 00:22:44.809222 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b81e6f80-8f8a-4c31-a8aa-7f52a0e09541-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 00:22:44 crc kubenswrapper[5110]: I0126 00:22:44.810956 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prld8" event={"ID":"b81e6f80-8f8a-4c31-a8aa-7f52a0e09541","Type":"ContainerDied","Data":"9ec6cbbb82cb529affbb4c40475e6485932d6d5cf57db7dbbf75ab8af361c2fc"}
Jan 26 00:22:44 crc kubenswrapper[5110]: I0126 00:22:44.811237 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prld8"
Jan 26 00:22:44 crc kubenswrapper[5110]: I0126 00:22:44.817926 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b81e6f80-8f8a-4c31-a8aa-7f52a0e09541-kube-api-access-gdsdw" (OuterVolumeSpecName: "kube-api-access-gdsdw") pod "b81e6f80-8f8a-4c31-a8aa-7f52a0e09541" (UID: "b81e6f80-8f8a-4c31-a8aa-7f52a0e09541"). InnerVolumeSpecName "kube-api-access-gdsdw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:22:44 crc kubenswrapper[5110]: I0126 00:22:44.836444 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b81e6f80-8f8a-4c31-a8aa-7f52a0e09541-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b81e6f80-8f8a-4c31-a8aa-7f52a0e09541" (UID: "b81e6f80-8f8a-4c31-a8aa-7f52a0e09541"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:22:44 crc kubenswrapper[5110]: I0126 00:22:44.910891 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b81e6f80-8f8a-4c31-a8aa-7f52a0e09541-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 00:22:44 crc kubenswrapper[5110]: I0126 00:22:44.910929 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gdsdw\" (UniqueName: \"kubernetes.io/projected/b81e6f80-8f8a-4c31-a8aa-7f52a0e09541-kube-api-access-gdsdw\") on node \"crc\" DevicePath \"\""
Jan 26 00:22:45 crc kubenswrapper[5110]: I0126 00:22:45.142335 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prld8"]
Jan 26 00:22:45 crc kubenswrapper[5110]: I0126 00:22:45.148386 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-prld8"]
Jan 26 00:22:45 crc kubenswrapper[5110]: I0126 00:22:45.324263 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b81e6f80-8f8a-4c31-a8aa-7f52a0e09541" path="/var/lib/kubelet/pods/b81e6f80-8f8a-4c31-a8aa-7f52a0e09541/volumes"
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.005916 5110 scope.go:117] "RemoveContainer" containerID="d32f1d10fdcc1a158c14127f974071a5a4e2c0d177cbdaa9b52be9f550c7da45"
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.098192 5110 scope.go:117] "RemoveContainer" containerID="f3d247990c87a7960a7610fdc40f9d46f1436160c0cefa501869d83fbc0c6f26"
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.185158 5110 scope.go:117] "RemoveContainer" containerID="8304b9b2f3a2c944434689fa22e6179b11aa03891644a5ffd7d6b6ffdda9726e"
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.204442 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-gmzn5"]
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.225173 5110 scope.go:117] "RemoveContainer" containerID="bebb72dd87d82e8372986791668a36e18767fc6d6a92a3531780f351c92551c7"
Jan 26 00:22:46 crc kubenswrapper[5110]: W0126 00:22:46.225228 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b06d07c_855b_463f_aba7_16ab95dba6bc.slice/crio-94eee95c2096f5eb5a667456fdf575820488837d65f149c55628f7ef41a31ade WatchSource:0}: Error finding container 94eee95c2096f5eb5a667456fdf575820488837d65f149c55628f7ef41a31ade: Status 404 returned error can't find the container with id 94eee95c2096f5eb5a667456fdf575820488837d65f149c55628f7ef41a31ade
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.352501 5110 scope.go:117] "RemoveContainer" containerID="84f9b0ad682a55287ac9cd5bb62941f419db120b9f7ed10c92105b23e9700edb"
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.826694 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-59b9c7b5d-xkl57" event={"ID":"dae80e23-f54a-4949-bb26-29e9afccc40d","Type":"ContainerStarted","Data":"9d4003b659454f66c881bafb23a5ad091c3a815c369f53681a405cfb20a6cf5e"}
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.839035 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-7m2dg" event={"ID":"7cc0b3a9-bae1-46d7-974b-da9bd9c524e4","Type":"ContainerStarted","Data":"c1beb9c17c3d9c865e935ede1bdebc5dd08c828947aca55fc3b23409c9f01782"}
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.842727 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-5sc47" event={"ID":"85376b96-d97e-4b5e-9bb0-9a931610c0ec","Type":"ContainerStarted","Data":"855447f6e4ce233a1222ad8c000cf9142ba12903212293e0a3f730cec7abe4d2"}
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.852246 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-5jr67" event={"ID":"e5dbd874-cd23-4d81-92b1-7bc9c9109e2c","Type":"ContainerStarted","Data":"97c81f67fc07ae890f78fa0889e2ef37e8ecd7fa27d275d43c55fa761f87c83a"}
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.853091 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-5jr67"
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.855420 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-59b9c7b5d-xkl57" podStartSLOduration=1.973660162 podStartE2EDuration="23.855407388s" podCreationTimestamp="2026-01-26 00:22:23 +0000 UTC" firstStartedPulling="2026-01-26 00:22:24.220010776 +0000 UTC m=+801.448909385" lastFinishedPulling="2026-01-26 00:22:46.101758002 +0000 UTC m=+823.330656611" observedRunningTime="2026-01-26 00:22:46.849303753 +0000 UTC m=+824.078202372" watchObservedRunningTime="2026-01-26 00:22:46.855407388 +0000 UTC m=+824.084305997"
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.863278 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-hjwfj" event={"ID":"47b88cf6-8ce3-4593-b450-4a0b6ac95908","Type":"ContainerStarted","Data":"630c6677254fb125345b303b9272c049ea901b0d6d3d83fc153f37c22b1c747e"}
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.879078 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-5sc47" podStartSLOduration=3.842785449 podStartE2EDuration="27.879061266s" podCreationTimestamp="2026-01-26 00:22:19 +0000 UTC" firstStartedPulling="2026-01-26 00:22:22.065497165 +0000 UTC m=+799.294395774" lastFinishedPulling="2026-01-26 00:22:46.101772982 +0000 UTC m=+823.330671591" observedRunningTime="2026-01-26 00:22:46.878057667 +0000 UTC m=+824.106956296" watchObservedRunningTime="2026-01-26 00:22:46.879061266 +0000 UTC m=+824.107959875"
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.902632 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-jr7zr" event={"ID":"18496abe-bb01-4ce4-a79c-35bc522ec58d","Type":"ContainerStarted","Data":"1fdc7de1f4d3b8062deabbc0f429b73b4de5b0c3b53e3b59b63c18acdbfdafe0"}
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.903647 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-jr7zr"
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.907530 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-898766f4b-7m2dg" podStartSLOduration=4.048326295 podStartE2EDuration="27.90750798s" podCreationTimestamp="2026-01-26 00:22:19 +0000 UTC" firstStartedPulling="2026-01-26 00:22:22.062349745 +0000 UTC m=+799.291248354" lastFinishedPulling="2026-01-26 00:22:45.92153143 +0000 UTC m=+823.150430039" observedRunningTime="2026-01-26 00:22:46.903853356 +0000 UTC m=+824.132751965" watchObservedRunningTime="2026-01-26 00:22:46.90750798 +0000 UTC m=+824.136406599"
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.926888 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-5jr67" podStartSLOduration=3.979622047 podStartE2EDuration="27.926870635s" podCreationTimestamp="2026-01-26 00:22:19 +0000 UTC" firstStartedPulling="2026-01-26 00:22:22.150408647 +0000 UTC m=+799.379307256" lastFinishedPulling="2026-01-26 00:22:46.097657235 +0000 UTC m=+823.326555844" observedRunningTime="2026-01-26 00:22:46.925460315 +0000 UTC m=+824.154358934" watchObservedRunningTime="2026-01-26 00:22:46.926870635 +0000 UTC m=+824.155769244"
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.930743 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-jr7zr"
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.944130 5110 generic.go:358] "Generic (PLEG): container finished" podID="857dd7a1-3d35-47a1-b2fa-5bcee0265262" containerID="9ce95eba118ae5445eab68777855afbf5b0111e850a3924f5410724e5c3302e5" exitCode=0
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.944272 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z" event={"ID":"857dd7a1-3d35-47a1-b2fa-5bcee0265262","Type":"ContainerDied","Data":"9ce95eba118ae5445eab68777855afbf5b0111e850a3924f5410724e5c3302e5"}
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.952300 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-gmzn5" event={"ID":"1b06d07c-855b-463f-aba7-16ab95dba6bc","Type":"ContainerStarted","Data":"94eee95c2096f5eb5a667456fdf575820488837d65f149c55628f7ef41a31ade"}
Jan 26 00:22:46 crc kubenswrapper[5110]: I0126 00:22:46.974253 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-jr7zr" podStartSLOduration=3.671097761 podStartE2EDuration="27.974219631s" podCreationTimestamp="2026-01-26 00:22:19 +0000 UTC" firstStartedPulling="2026-01-26 00:22:21.805061486 +0000 UTC m=+799.033960085" lastFinishedPulling="2026-01-26 00:22:46.108183336 +0000 UTC m=+823.337081955" observedRunningTime="2026-01-26 00:22:46.967630603 +0000 UTC m=+824.196529202" watchObservedRunningTime="2026-01-26 00:22:46.974219631 +0000 UTC m=+824.203118260"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.009467 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-hjwfj" podStartSLOduration=3.629905152 podStartE2EDuration="28.00943881s" podCreationTimestamp="2026-01-26 00:22:19 +0000 UTC" firstStartedPulling="2026-01-26 00:22:21.689928778 +0000 UTC m=+798.918827387" lastFinishedPulling="2026-01-26 00:22:46.069462436 +0000 UTC m=+823.298361045" observedRunningTime="2026-01-26 00:22:46.99476802 +0000 UTC m=+824.223666659" watchObservedRunningTime="2026-01-26 00:22:47.00943881 +0000 UTC m=+824.238337419"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.703785 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.704727 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="582e457d-8bb7-4c07-96f7-bbc77fce72cb" containerName="extract-content"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.704745 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="582e457d-8bb7-4c07-96f7-bbc77fce72cb" containerName="extract-content"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.704767 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="582e457d-8bb7-4c07-96f7-bbc77fce72cb" containerName="registry-server"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.704774 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="582e457d-8bb7-4c07-96f7-bbc77fce72cb" containerName="registry-server"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.704788 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b81e6f80-8f8a-4c31-a8aa-7f52a0e09541" containerName="registry-server"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.704810 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81e6f80-8f8a-4c31-a8aa-7f52a0e09541" containerName="registry-server"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.704820 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b81e6f80-8f8a-4c31-a8aa-7f52a0e09541" containerName="extract-utilities"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.704825 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81e6f80-8f8a-4c31-a8aa-7f52a0e09541" containerName="extract-utilities"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.704836 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="582e457d-8bb7-4c07-96f7-bbc77fce72cb" containerName="extract-utilities"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.704841 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="582e457d-8bb7-4c07-96f7-bbc77fce72cb" containerName="extract-utilities"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.704855 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b81e6f80-8f8a-4c31-a8aa-7f52a0e09541" containerName="extract-content"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.704862 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81e6f80-8f8a-4c31-a8aa-7f52a0e09541" containerName="extract-content"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.704958 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="b81e6f80-8f8a-4c31-a8aa-7f52a0e09541" containerName="registry-server"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.704971 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="582e457d-8bb7-4c07-96f7-bbc77fce72cb" containerName="registry-server"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.711301 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.713497 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\""
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.713973 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\""
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.714104 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\""
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.714125 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\""
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.714218 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-rn4r2\""
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.714593 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\""
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.714738 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\""
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.715685 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\""
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.717348 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\""
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.728228 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.800337 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.800386 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.800412 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.800446 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4bd7a874-fbbe-472e-bedc-fcb339de5b04-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.800466 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.800490 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.800504 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.800546 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.800566 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/4bd7a874-fbbe-472e-bedc-fcb339de5b04-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.800582 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.800606 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.800626 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.800681 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.800703 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.800725 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.901912 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.902043 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.902076 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") "
pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.902104 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.902125 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.902370 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.902569 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4bd7a874-fbbe-472e-bedc-fcb339de5b04-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.902617 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-probe-user\") pod 
\"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.902686 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.902716 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.902921 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.902958 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/4bd7a874-fbbe-472e-bedc-fcb339de5b04-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.902966 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.902987 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.902562 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.903016 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4bd7a874-fbbe-472e-bedc-fcb339de5b04-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.902990 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.903182 5110 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.903217 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.903321 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.903511 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.903900 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-elasticsearch-config-local\") pod 
\"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.904700 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.910978 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.910993 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.911117 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.911379 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.915579 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.916758 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/4bd7a874-fbbe-472e-bedc-fcb339de5b04-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.929928 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/4bd7a874-fbbe-472e-bedc-fcb339de5b04-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"4bd7a874-fbbe-472e-bedc-fcb339de5b04\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.969503 5110 generic.go:358] "Generic (PLEG): container finished" podID="857dd7a1-3d35-47a1-b2fa-5bcee0265262" containerID="8c005fb2f112bac20f2dd49415cd2f044144a50843453bef32bf67f0d6833799" exitCode=0 Jan 26 00:22:47 crc kubenswrapper[5110]: I0126 00:22:47.969892 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z" 
event={"ID":"857dd7a1-3d35-47a1-b2fa-5bcee0265262","Type":"ContainerDied","Data":"8c005fb2f112bac20f2dd49415cd2f044144a50843453bef32bf67f0d6833799"} Jan 26 00:22:48 crc kubenswrapper[5110]: I0126 00:22:48.041317 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:48 crc kubenswrapper[5110]: I0126 00:22:48.462332 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:22:48 crc kubenswrapper[5110]: I0126 00:22:48.977824 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"4bd7a874-fbbe-472e-bedc-fcb339de5b04","Type":"ContainerStarted","Data":"4989723b9715a476003c7ba853b6484411e32a45836a74c8e80e5bfbcf8d313d"} Jan 26 00:22:49 crc kubenswrapper[5110]: I0126 00:22:49.265871 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z" Jan 26 00:22:49 crc kubenswrapper[5110]: I0126 00:22:49.322298 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/857dd7a1-3d35-47a1-b2fa-5bcee0265262-util\") pod \"857dd7a1-3d35-47a1-b2fa-5bcee0265262\" (UID: \"857dd7a1-3d35-47a1-b2fa-5bcee0265262\") " Jan 26 00:22:49 crc kubenswrapper[5110]: I0126 00:22:49.322386 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/857dd7a1-3d35-47a1-b2fa-5bcee0265262-bundle\") pod \"857dd7a1-3d35-47a1-b2fa-5bcee0265262\" (UID: \"857dd7a1-3d35-47a1-b2fa-5bcee0265262\") " Jan 26 00:22:49 crc kubenswrapper[5110]: I0126 00:22:49.322541 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfc26\" (UniqueName: 
\"kubernetes.io/projected/857dd7a1-3d35-47a1-b2fa-5bcee0265262-kube-api-access-pfc26\") pod \"857dd7a1-3d35-47a1-b2fa-5bcee0265262\" (UID: \"857dd7a1-3d35-47a1-b2fa-5bcee0265262\") " Jan 26 00:22:49 crc kubenswrapper[5110]: I0126 00:22:49.324347 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/857dd7a1-3d35-47a1-b2fa-5bcee0265262-bundle" (OuterVolumeSpecName: "bundle") pod "857dd7a1-3d35-47a1-b2fa-5bcee0265262" (UID: "857dd7a1-3d35-47a1-b2fa-5bcee0265262"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:22:49 crc kubenswrapper[5110]: I0126 00:22:49.331645 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/857dd7a1-3d35-47a1-b2fa-5bcee0265262-kube-api-access-pfc26" (OuterVolumeSpecName: "kube-api-access-pfc26") pod "857dd7a1-3d35-47a1-b2fa-5bcee0265262" (UID: "857dd7a1-3d35-47a1-b2fa-5bcee0265262"). InnerVolumeSpecName "kube-api-access-pfc26". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:22:49 crc kubenswrapper[5110]: I0126 00:22:49.337488 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/857dd7a1-3d35-47a1-b2fa-5bcee0265262-util" (OuterVolumeSpecName: "util") pod "857dd7a1-3d35-47a1-b2fa-5bcee0265262" (UID: "857dd7a1-3d35-47a1-b2fa-5bcee0265262"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:22:49 crc kubenswrapper[5110]: I0126 00:22:49.423719 5110 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/857dd7a1-3d35-47a1-b2fa-5bcee0265262-util\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:49 crc kubenswrapper[5110]: I0126 00:22:49.423754 5110 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/857dd7a1-3d35-47a1-b2fa-5bcee0265262-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:49 crc kubenswrapper[5110]: I0126 00:22:49.423764 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pfc26\" (UniqueName: \"kubernetes.io/projected/857dd7a1-3d35-47a1-b2fa-5bcee0265262-kube-api-access-pfc26\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:49 crc kubenswrapper[5110]: I0126 00:22:49.990514 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z" event={"ID":"857dd7a1-3d35-47a1-b2fa-5bcee0265262","Type":"ContainerDied","Data":"1e7f4b88ec089e27aa5c2864e25c1d38980d7584a407b38790484026a9fa2b95"} Jan 26 00:22:49 crc kubenswrapper[5110]: I0126 00:22:49.990573 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e7f4b88ec089e27aa5c2864e25c1d38980d7584a407b38790484026a9fa2b95" Jan 26 00:22:49 crc kubenswrapper[5110]: I0126 00:22:49.990611 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z" Jan 26 00:22:55 crc kubenswrapper[5110]: I0126 00:22:55.941864 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-ffgjv"] Jan 26 00:22:55 crc kubenswrapper[5110]: I0126 00:22:55.943303 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="857dd7a1-3d35-47a1-b2fa-5bcee0265262" containerName="extract" Jan 26 00:22:55 crc kubenswrapper[5110]: I0126 00:22:55.943320 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="857dd7a1-3d35-47a1-b2fa-5bcee0265262" containerName="extract" Jan 26 00:22:55 crc kubenswrapper[5110]: I0126 00:22:55.943337 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="857dd7a1-3d35-47a1-b2fa-5bcee0265262" containerName="pull" Jan 26 00:22:55 crc kubenswrapper[5110]: I0126 00:22:55.943343 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="857dd7a1-3d35-47a1-b2fa-5bcee0265262" containerName="pull" Jan 26 00:22:55 crc kubenswrapper[5110]: I0126 00:22:55.943352 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="857dd7a1-3d35-47a1-b2fa-5bcee0265262" containerName="util" Jan 26 00:22:55 crc kubenswrapper[5110]: I0126 00:22:55.943360 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="857dd7a1-3d35-47a1-b2fa-5bcee0265262" containerName="util" Jan 26 00:22:55 crc kubenswrapper[5110]: I0126 00:22:55.943453 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="857dd7a1-3d35-47a1-b2fa-5bcee0265262" containerName="extract" Jan 26 00:22:55 crc kubenswrapper[5110]: I0126 00:22:55.960443 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-ffgjv"] Jan 26 00:22:55 crc kubenswrapper[5110]: I0126 00:22:55.960618 5110 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-ffgjv" Jan 26 00:22:55 crc kubenswrapper[5110]: I0126 00:22:55.962629 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:22:55 crc kubenswrapper[5110]: I0126 00:22:55.962970 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-cl5fc\"" Jan 26 00:22:55 crc kubenswrapper[5110]: I0126 00:22:55.963251 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Jan 26 00:22:55 crc kubenswrapper[5110]: I0126 00:22:55.992746 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e39526aa-e508-4c71-8557-56c3efd64a06-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-ffgjv\" (UID: \"e39526aa-e508-4c71-8557-56c3efd64a06\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-ffgjv" Jan 26 00:22:55 crc kubenswrapper[5110]: I0126 00:22:55.992857 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gg75\" (UniqueName: \"kubernetes.io/projected/e39526aa-e508-4c71-8557-56c3efd64a06-kube-api-access-4gg75\") pod \"cert-manager-operator-controller-manager-64c74584c4-ffgjv\" (UID: \"e39526aa-e508-4c71-8557-56c3efd64a06\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-ffgjv" Jan 26 00:22:56 crc kubenswrapper[5110]: I0126 00:22:56.109026 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e39526aa-e508-4c71-8557-56c3efd64a06-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-ffgjv\" (UID: 
\"e39526aa-e508-4c71-8557-56c3efd64a06\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-ffgjv" Jan 26 00:22:56 crc kubenswrapper[5110]: I0126 00:22:56.109115 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4gg75\" (UniqueName: \"kubernetes.io/projected/e39526aa-e508-4c71-8557-56c3efd64a06-kube-api-access-4gg75\") pod \"cert-manager-operator-controller-manager-64c74584c4-ffgjv\" (UID: \"e39526aa-e508-4c71-8557-56c3efd64a06\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-ffgjv" Jan 26 00:22:56 crc kubenswrapper[5110]: I0126 00:22:56.109827 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e39526aa-e508-4c71-8557-56c3efd64a06-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-ffgjv\" (UID: \"e39526aa-e508-4c71-8557-56c3efd64a06\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-ffgjv" Jan 26 00:22:56 crc kubenswrapper[5110]: I0126 00:22:56.160744 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gg75\" (UniqueName: \"kubernetes.io/projected/e39526aa-e508-4c71-8557-56c3efd64a06-kube-api-access-4gg75\") pod \"cert-manager-operator-controller-manager-64c74584c4-ffgjv\" (UID: \"e39526aa-e508-4c71-8557-56c3efd64a06\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-ffgjv" Jan 26 00:22:56 crc kubenswrapper[5110]: I0126 00:22:56.313099 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-ffgjv" Jan 26 00:22:58 crc kubenswrapper[5110]: I0126 00:22:58.985144 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-5jr67" Jan 26 00:23:14 crc kubenswrapper[5110]: I0126 00:23:14.736976 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:23:14 crc kubenswrapper[5110]: I0126 00:23:14.830294 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:14 crc kubenswrapper[5110]: I0126 00:23:14.832589 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-ng4dn\"" Jan 26 00:23:14 crc kubenswrapper[5110]: I0126 00:23:14.834764 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\"" Jan 26 00:23:14 crc kubenswrapper[5110]: I0126 00:23:14.834886 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\"" Jan 26 00:23:14 crc kubenswrapper[5110]: I0126 00:23:14.835741 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\"" Jan 26 00:23:14 crc kubenswrapper[5110]: I0126 00:23:14.934897 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/a3d7a049-60f0-421c-9ba4-20222bf896c1-builder-dockercfg-ng4dn-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:14 crc kubenswrapper[5110]: I0126 
00:23:14.934955 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:14 crc kubenswrapper[5110]: I0126 00:23:14.934986 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kz9j\" (UniqueName: \"kubernetes.io/projected/a3d7a049-60f0-421c-9ba4-20222bf896c1-kube-api-access-9kz9j\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:14 crc kubenswrapper[5110]: I0126 00:23:14.935023 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:14 crc kubenswrapper[5110]: I0126 00:23:14.935171 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a3d7a049-60f0-421c-9ba4-20222bf896c1-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:14 crc kubenswrapper[5110]: I0126 00:23:14.935262 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:14 crc kubenswrapper[5110]: I0126 00:23:14.935290 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a3d7a049-60f0-421c-9ba4-20222bf896c1-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:14 crc kubenswrapper[5110]: I0126 00:23:14.935323 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:14 crc kubenswrapper[5110]: I0126 00:23:14.935423 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/a3d7a049-60f0-421c-9ba4-20222bf896c1-builder-dockercfg-ng4dn-push\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:14 crc kubenswrapper[5110]: I0126 00:23:14.935480 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:14 crc 
kubenswrapper[5110]: I0126 00:23:14.935530 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:14 crc kubenswrapper[5110]: I0126 00:23:14.935568 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.035001 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.037172 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/a3d7a049-60f0-421c-9ba4-20222bf896c1-builder-dockercfg-ng4dn-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.037229 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.037255 5110 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9kz9j\" (UniqueName: \"kubernetes.io/projected/a3d7a049-60f0-421c-9ba4-20222bf896c1-kube-api-access-9kz9j\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.037462 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.037529 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a3d7a049-60f0-421c-9ba4-20222bf896c1-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.037582 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.037601 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a3d7a049-60f0-421c-9ba4-20222bf896c1-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 
00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.037648 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.037700 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/a3d7a049-60f0-421c-9ba4-20222bf896c1-builder-dockercfg-ng4dn-push\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.037732 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.037764 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.037788 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: 
\"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.038281 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.038336 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.038381 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.038632 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.038689 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/a3d7a049-60f0-421c-9ba4-20222bf896c1-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.038781 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.038840 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a3d7a049-60f0-421c-9ba4-20222bf896c1-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.038920 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.039731 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.046176 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/a3d7a049-60f0-421c-9ba4-20222bf896c1-builder-dockercfg-ng4dn-push\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.053424 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/a3d7a049-60f0-421c-9ba4-20222bf896c1-builder-dockercfg-ng4dn-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.058689 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kz9j\" (UniqueName: \"kubernetes.io/projected/a3d7a049-60f0-421c-9ba4-20222bf896c1-kube-api-access-9kz9j\") pod \"service-telemetry-operator-1-build\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:15 crc kubenswrapper[5110]: I0126 00:23:15.174381 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:16 crc kubenswrapper[5110]: I0126 00:23:16.958190 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:23:17 crc kubenswrapper[5110]: I0126 00:23:17.099707 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-ffgjv"] Jan 26 00:23:17 crc kubenswrapper[5110]: I0126 00:23:17.457731 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-gmzn5" event={"ID":"1b06d07c-855b-463f-aba7-16ab95dba6bc","Type":"ContainerStarted","Data":"586a90d2eec5d4a5f7e5d4a195b2088661ddf6b3cb7c155aaf1ae04ebbb79a40"} Jan 26 00:23:17 crc kubenswrapper[5110]: I0126 00:23:17.460245 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"4bd7a874-fbbe-472e-bedc-fcb339de5b04","Type":"ContainerStarted","Data":"5ee3fbb711563c7e7ab618edeb9857d970547dc78e8b09ab4c75652c5f2c5508"} Jan 26 00:23:17 crc kubenswrapper[5110]: I0126 00:23:17.461194 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"a3d7a049-60f0-421c-9ba4-20222bf896c1","Type":"ContainerStarted","Data":"7df78a91d13161ef22eb1ba47799b990e8a0e75916cb8d9d99476bea3a53c839"} Jan 26 00:23:17 crc kubenswrapper[5110]: I0126 00:23:17.464291 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-ffgjv" event={"ID":"e39526aa-e508-4c71-8557-56c3efd64a06","Type":"ContainerStarted","Data":"ababd4c2f3d2257d0a5d588fd2e6063629d88f5ff7a421180e2efa6225ea1e16"} Jan 26 00:23:17 crc kubenswrapper[5110]: I0126 00:23:17.476643 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/interconnect-operator-78b9bd8798-gmzn5" podStartSLOduration=23.263673429 podStartE2EDuration="53.476628649s" podCreationTimestamp="2026-01-26 00:22:24 +0000 UTC" firstStartedPulling="2026-01-26 00:22:46.229206782 +0000 UTC m=+823.458105391" lastFinishedPulling="2026-01-26 00:23:16.442161992 +0000 UTC m=+853.671060611" observedRunningTime="2026-01-26 00:23:17.471884514 +0000 UTC m=+854.700783123" watchObservedRunningTime="2026-01-26 00:23:17.476628649 +0000 UTC m=+854.705527258" Jan 26 00:23:17 crc kubenswrapper[5110]: I0126 00:23:17.597551 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:23:17 crc kubenswrapper[5110]: I0126 00:23:17.624599 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:23:19 crc kubenswrapper[5110]: I0126 00:23:19.480438 5110 generic.go:358] "Generic (PLEG): container finished" podID="4bd7a874-fbbe-472e-bedc-fcb339de5b04" containerID="5ee3fbb711563c7e7ab618edeb9857d970547dc78e8b09ab4c75652c5f2c5508" exitCode=0 Jan 26 00:23:19 crc kubenswrapper[5110]: I0126 00:23:19.480606 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"4bd7a874-fbbe-472e-bedc-fcb339de5b04","Type":"ContainerDied","Data":"5ee3fbb711563c7e7ab618edeb9857d970547dc78e8b09ab4c75652c5f2c5508"} Jan 26 00:23:20 crc kubenswrapper[5110]: I0126 00:23:20.492370 5110 generic.go:358] "Generic (PLEG): container finished" podID="4bd7a874-fbbe-472e-bedc-fcb339de5b04" containerID="2d0009d69a7ef07227b18611b797ce9c7517bcfb44a1c3870ffa36d53f783409" exitCode=0 Jan 26 00:23:20 crc kubenswrapper[5110]: I0126 00:23:20.493144 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"4bd7a874-fbbe-472e-bedc-fcb339de5b04","Type":"ContainerDied","Data":"2d0009d69a7ef07227b18611b797ce9c7517bcfb44a1c3870ffa36d53f783409"} Jan 
26 00:23:25 crc kubenswrapper[5110]: I0126 00:23:25.211728 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:23:26 crc kubenswrapper[5110]: I0126 00:23:26.813351 5110 patch_prober.go:28] interesting pod/machine-config-daemon-c6tpr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:23:26 crc kubenswrapper[5110]: I0126 00:23:26.813452 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.009272 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.198182 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.198438 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.200900 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\"" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.201026 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\"" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.201032 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\"" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.217003 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.217057 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.217094 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwktw\" (UniqueName: \"kubernetes.io/projected/0ec8732b-c9f0-4b4e-8774-673da5c59114-kube-api-access-pwktw\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.217156 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0ec8732b-c9f0-4b4e-8774-673da5c59114-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.217195 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.217234 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.217254 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.217279 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.217350 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/0ec8732b-c9f0-4b4e-8774-673da5c59114-builder-dockercfg-ng4dn-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.217385 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0ec8732b-c9f0-4b4e-8774-673da5c59114-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.217418 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.217445 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/0ec8732b-c9f0-4b4e-8774-673da5c59114-builder-dockercfg-ng4dn-push\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.318957 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.319017 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.319053 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pwktw\" (UniqueName: \"kubernetes.io/projected/0ec8732b-c9f0-4b4e-8774-673da5c59114-kube-api-access-pwktw\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.319467 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0ec8732b-c9f0-4b4e-8774-673da5c59114-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.319514 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-container-storage-run\") pod 
\"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.319555 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.319576 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.319605 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.319618 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0ec8732b-c9f0-4b4e-8774-673da5c59114-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.319635 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: 
\"kubernetes.io/secret/0ec8732b-c9f0-4b4e-8774-673da5c59114-builder-dockercfg-ng4dn-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.319724 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0ec8732b-c9f0-4b4e-8774-673da5c59114-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.319785 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.319834 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/0ec8732b-c9f0-4b4e-8774-673da5c59114-builder-dockercfg-ng4dn-push\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.319869 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.320019 5110 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.320033 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0ec8732b-c9f0-4b4e-8774-673da5c59114-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.320603 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.320901 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.321423 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc 
kubenswrapper[5110]: I0126 00:23:27.322004 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.322449 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.328838 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/0ec8732b-c9f0-4b4e-8774-673da5c59114-builder-dockercfg-ng4dn-push\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.328895 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/0ec8732b-c9f0-4b4e-8774-673da5c59114-builder-dockercfg-ng4dn-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.340886 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwktw\" (UniqueName: \"kubernetes.io/projected/0ec8732b-c9f0-4b4e-8774-673da5c59114-kube-api-access-pwktw\") pod \"service-telemetry-operator-2-build\" (UID: 
\"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:27 crc kubenswrapper[5110]: I0126 00:23:27.522345 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:23:29 crc kubenswrapper[5110]: I0126 00:23:29.727846 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-ffgjv" event={"ID":"e39526aa-e508-4c71-8557-56c3efd64a06","Type":"ContainerStarted","Data":"b99ac835bde4223f9e3ab70e7c42001c20a88cacf6b67468c5d3fd0ae7dc752a"} Jan 26 00:23:29 crc kubenswrapper[5110]: I0126 00:23:29.731396 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"4bd7a874-fbbe-472e-bedc-fcb339de5b04","Type":"ContainerStarted","Data":"15466754365f7fc1d482ccca3f76d40312771a022f9c5e21836b6dc837592044"} Jan 26 00:23:29 crc kubenswrapper[5110]: I0126 00:23:29.731685 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:23:29 crc kubenswrapper[5110]: I0126 00:23:29.750425 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 26 00:23:29 crc kubenswrapper[5110]: I0126 00:23:29.751718 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-ffgjv" podStartSLOduration=22.693171035 podStartE2EDuration="34.751706252s" podCreationTimestamp="2026-01-26 00:22:55 +0000 UTC" firstStartedPulling="2026-01-26 00:23:17.123981867 +0000 UTC m=+854.352880486" lastFinishedPulling="2026-01-26 00:23:29.182517104 +0000 UTC m=+866.411415703" observedRunningTime="2026-01-26 00:23:29.750790496 +0000 UTC m=+866.979689125" watchObservedRunningTime="2026-01-26 00:23:29.751706252 +0000 UTC 
m=+866.980604861" Jan 26 00:23:29 crc kubenswrapper[5110]: W0126 00:23:29.762436 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ec8732b_c9f0_4b4e_8774_673da5c59114.slice/crio-0cc444e611b483a35b5d31842c4f3502651d3920e50d7c6dfca5bd1f9d6b70f6 WatchSource:0}: Error finding container 0cc444e611b483a35b5d31842c4f3502651d3920e50d7c6dfca5bd1f9d6b70f6: Status 404 returned error can't find the container with id 0cc444e611b483a35b5d31842c4f3502651d3920e50d7c6dfca5bd1f9d6b70f6 Jan 26 00:23:29 crc kubenswrapper[5110]: I0126 00:23:29.800144 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=14.241222582 podStartE2EDuration="42.800123727s" podCreationTimestamp="2026-01-26 00:22:47 +0000 UTC" firstStartedPulling="2026-01-26 00:22:48.474364458 +0000 UTC m=+825.703263087" lastFinishedPulling="2026-01-26 00:23:17.033265633 +0000 UTC m=+854.262164232" observedRunningTime="2026-01-26 00:23:29.791687187 +0000 UTC m=+867.020585816" watchObservedRunningTime="2026-01-26 00:23:29.800123727 +0000 UTC m=+867.029022346" Jan 26 00:23:30 crc kubenswrapper[5110]: I0126 00:23:30.739536 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"0ec8732b-c9f0-4b4e-8774-673da5c59114","Type":"ContainerStarted","Data":"954206f3371245d9f3ff649d27ddfdd424de9ecd90dfce1619cab303796c5321"} Jan 26 00:23:30 crc kubenswrapper[5110]: I0126 00:23:30.740132 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"0ec8732b-c9f0-4b4e-8774-673da5c59114","Type":"ContainerStarted","Data":"0cc444e611b483a35b5d31842c4f3502651d3920e50d7c6dfca5bd1f9d6b70f6"} Jan 26 00:23:30 crc kubenswrapper[5110]: I0126 00:23:30.741288 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"a3d7a049-60f0-421c-9ba4-20222bf896c1","Type":"ContainerStarted","Data":"0a75efba074a655ca2fa41a185a942e96c4160a865e2e09e9a532a138d5a334f"} Jan 26 00:23:30 crc kubenswrapper[5110]: I0126 00:23:30.741457 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="a3d7a049-60f0-421c-9ba4-20222bf896c1" containerName="manage-dockerfile" containerID="cri-o://0a75efba074a655ca2fa41a185a942e96c4160a865e2e09e9a532a138d5a334f" gracePeriod=30 Jan 26 00:23:31 crc kubenswrapper[5110]: I0126 00:23:31.771787 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_a3d7a049-60f0-421c-9ba4-20222bf896c1/manage-dockerfile/0.log" Jan 26 00:23:31 crc kubenswrapper[5110]: I0126 00:23:31.772492 5110 generic.go:358] "Generic (PLEG): container finished" podID="a3d7a049-60f0-421c-9ba4-20222bf896c1" containerID="0a75efba074a655ca2fa41a185a942e96c4160a865e2e09e9a532a138d5a334f" exitCode=1 Jan 26 00:23:31 crc kubenswrapper[5110]: I0126 00:23:31.773573 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"a3d7a049-60f0-421c-9ba4-20222bf896c1","Type":"ContainerDied","Data":"0a75efba074a655ca2fa41a185a942e96c4160a865e2e09e9a532a138d5a334f"} Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.029971 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_a3d7a049-60f0-421c-9ba4-20222bf896c1/manage-dockerfile/0.log" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.030540 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.099179 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-buildworkdir\") pod \"a3d7a049-60f0-421c-9ba4-20222bf896c1\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.099630 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-proxy-ca-bundles\") pod \"a3d7a049-60f0-421c-9ba4-20222bf896c1\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.099750 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a3d7a049-60f0-421c-9ba4-20222bf896c1-node-pullsecrets\") pod \"a3d7a049-60f0-421c-9ba4-20222bf896c1\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.099897 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kz9j\" (UniqueName: \"kubernetes.io/projected/a3d7a049-60f0-421c-9ba4-20222bf896c1-kube-api-access-9kz9j\") pod \"a3d7a049-60f0-421c-9ba4-20222bf896c1\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.099990 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/a3d7a049-60f0-421c-9ba4-20222bf896c1-builder-dockercfg-ng4dn-pull\") pod \"a3d7a049-60f0-421c-9ba4-20222bf896c1\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.100118 5110 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/a3d7a049-60f0-421c-9ba4-20222bf896c1-builder-dockercfg-ng4dn-push\") pod \"a3d7a049-60f0-421c-9ba4-20222bf896c1\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.100229 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-ca-bundles\") pod \"a3d7a049-60f0-421c-9ba4-20222bf896c1\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.100306 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-system-configs\") pod \"a3d7a049-60f0-421c-9ba4-20222bf896c1\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.100421 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a3d7a049-60f0-421c-9ba4-20222bf896c1-buildcachedir\") pod \"a3d7a049-60f0-421c-9ba4-20222bf896c1\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.100520 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-container-storage-root\") pod \"a3d7a049-60f0-421c-9ba4-20222bf896c1\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.100610 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-container-storage-run\") pod \"a3d7a049-60f0-421c-9ba4-20222bf896c1\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.100739 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-blob-cache\") pod \"a3d7a049-60f0-421c-9ba4-20222bf896c1\" (UID: \"a3d7a049-60f0-421c-9ba4-20222bf896c1\") " Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.102065 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "a3d7a049-60f0-421c-9ba4-20222bf896c1" (UID: "a3d7a049-60f0-421c-9ba4-20222bf896c1"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.102484 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "a3d7a049-60f0-421c-9ba4-20222bf896c1" (UID: "a3d7a049-60f0-421c-9ba4-20222bf896c1"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.103252 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "a3d7a049-60f0-421c-9ba4-20222bf896c1" (UID: "a3d7a049-60f0-421c-9ba4-20222bf896c1"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.103354 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d7a049-60f0-421c-9ba4-20222bf896c1-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a3d7a049-60f0-421c-9ba4-20222bf896c1" (UID: "a3d7a049-60f0-421c-9ba4-20222bf896c1"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.104516 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "a3d7a049-60f0-421c-9ba4-20222bf896c1" (UID: "a3d7a049-60f0-421c-9ba4-20222bf896c1"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.104546 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "a3d7a049-60f0-421c-9ba4-20222bf896c1" (UID: "a3d7a049-60f0-421c-9ba4-20222bf896c1"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.104549 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d7a049-60f0-421c-9ba4-20222bf896c1-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "a3d7a049-60f0-421c-9ba4-20222bf896c1" (UID: "a3d7a049-60f0-421c-9ba4-20222bf896c1"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.105205 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "a3d7a049-60f0-421c-9ba4-20222bf896c1" (UID: "a3d7a049-60f0-421c-9ba4-20222bf896c1"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.105643 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "a3d7a049-60f0-421c-9ba4-20222bf896c1" (UID: "a3d7a049-60f0-421c-9ba4-20222bf896c1"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.119945 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d7a049-60f0-421c-9ba4-20222bf896c1-kube-api-access-9kz9j" (OuterVolumeSpecName: "kube-api-access-9kz9j") pod "a3d7a049-60f0-421c-9ba4-20222bf896c1" (UID: "a3d7a049-60f0-421c-9ba4-20222bf896c1"). InnerVolumeSpecName "kube-api-access-9kz9j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.121363 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d7a049-60f0-421c-9ba4-20222bf896c1-builder-dockercfg-ng4dn-push" (OuterVolumeSpecName: "builder-dockercfg-ng4dn-push") pod "a3d7a049-60f0-421c-9ba4-20222bf896c1" (UID: "a3d7a049-60f0-421c-9ba4-20222bf896c1"). InnerVolumeSpecName "builder-dockercfg-ng4dn-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.131114 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d7a049-60f0-421c-9ba4-20222bf896c1-builder-dockercfg-ng4dn-pull" (OuterVolumeSpecName: "builder-dockercfg-ng4dn-pull") pod "a3d7a049-60f0-421c-9ba4-20222bf896c1" (UID: "a3d7a049-60f0-421c-9ba4-20222bf896c1"). InnerVolumeSpecName "builder-dockercfg-ng4dn-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.202171 5110 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.202975 5110 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.203061 5110 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.203121 5110 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a3d7a049-60f0-421c-9ba4-20222bf896c1-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.203190 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9kz9j\" (UniqueName: \"kubernetes.io/projected/a3d7a049-60f0-421c-9ba4-20222bf896c1-kube-api-access-9kz9j\") on node \"crc\" DevicePath \"\"" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.203255 5110 reconciler_common.go:299] 
"Volume detached for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/a3d7a049-60f0-421c-9ba4-20222bf896c1-builder-dockercfg-ng4dn-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.203317 5110 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/a3d7a049-60f0-421c-9ba4-20222bf896c1-builder-dockercfg-ng4dn-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.203383 5110 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.203457 5110 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a3d7a049-60f0-421c-9ba4-20222bf896c1-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.203522 5110 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a3d7a049-60f0-421c-9ba4-20222bf896c1-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.203579 5110 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.203635 5110 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a3d7a049-60f0-421c-9ba4-20222bf896c1-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.434075 5110 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["cert-manager/cert-manager-webhook-7894b5b9b4-zptcb"] Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.434922 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a3d7a049-60f0-421c-9ba4-20222bf896c1" containerName="manage-dockerfile" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.434947 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d7a049-60f0-421c-9ba4-20222bf896c1" containerName="manage-dockerfile" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.435084 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="a3d7a049-60f0-421c-9ba4-20222bf896c1" containerName="manage-dockerfile" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.512707 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-zptcb" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.514614 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-zptcb"] Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.516891 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.517560 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.518434 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-69cmp\"" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.613682 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/63a94427-5376-447b-af64-fbf2d9576a40-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-zptcb\" (UID: \"63a94427-5376-447b-af64-fbf2d9576a40\") " 
pod="cert-manager/cert-manager-webhook-7894b5b9b4-zptcb" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.613806 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28pww\" (UniqueName: \"kubernetes.io/projected/63a94427-5376-447b-af64-fbf2d9576a40-kube-api-access-28pww\") pod \"cert-manager-webhook-7894b5b9b4-zptcb\" (UID: \"63a94427-5376-447b-af64-fbf2d9576a40\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-zptcb" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.715786 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/63a94427-5376-447b-af64-fbf2d9576a40-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-zptcb\" (UID: \"63a94427-5376-447b-af64-fbf2d9576a40\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-zptcb" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.715957 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-28pww\" (UniqueName: \"kubernetes.io/projected/63a94427-5376-447b-af64-fbf2d9576a40-kube-api-access-28pww\") pod \"cert-manager-webhook-7894b5b9b4-zptcb\" (UID: \"63a94427-5376-447b-af64-fbf2d9576a40\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-zptcb" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.737898 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-28pww\" (UniqueName: \"kubernetes.io/projected/63a94427-5376-447b-af64-fbf2d9576a40-kube-api-access-28pww\") pod \"cert-manager-webhook-7894b5b9b4-zptcb\" (UID: \"63a94427-5376-447b-af64-fbf2d9576a40\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-zptcb" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.748650 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/63a94427-5376-447b-af64-fbf2d9576a40-bound-sa-token\") 
pod \"cert-manager-webhook-7894b5b9b4-zptcb\" (UID: \"63a94427-5376-447b-af64-fbf2d9576a40\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-zptcb" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.789335 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_a3d7a049-60f0-421c-9ba4-20222bf896c1/manage-dockerfile/0.log" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.789614 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"a3d7a049-60f0-421c-9ba4-20222bf896c1","Type":"ContainerDied","Data":"7df78a91d13161ef22eb1ba47799b990e8a0e75916cb8d9d99476bea3a53c839"} Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.789668 5110 scope.go:117] "RemoveContainer" containerID="0a75efba074a655ca2fa41a185a942e96c4160a865e2e09e9a532a138d5a334f" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.789875 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.816402 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.822860 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:23:33 crc kubenswrapper[5110]: I0126 00:23:33.849278 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-zptcb" Jan 26 00:23:35 crc kubenswrapper[5110]: I0126 00:23:35.333698 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3d7a049-60f0-421c-9ba4-20222bf896c1" path="/var/lib/kubelet/pods/a3d7a049-60f0-421c-9ba4-20222bf896c1/volumes" Jan 26 00:23:35 crc kubenswrapper[5110]: I0126 00:23:35.567458 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-58c5k"] Jan 26 00:23:35 crc kubenswrapper[5110]: I0126 00:23:35.578224 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-58c5k" Jan 26 00:23:35 crc kubenswrapper[5110]: I0126 00:23:35.580131 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-czz6d\"" Jan 26 00:23:35 crc kubenswrapper[5110]: I0126 00:23:35.581027 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-58c5k"] Jan 26 00:23:35 crc kubenswrapper[5110]: I0126 00:23:35.617312 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-zptcb"] Jan 26 00:23:35 crc kubenswrapper[5110]: I0126 00:23:35.650019 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34d875fe-deae-45f9-8d5b-6e28c4b6138a-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-58c5k\" (UID: \"34d875fe-deae-45f9-8d5b-6e28c4b6138a\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-58c5k" Jan 26 00:23:35 crc kubenswrapper[5110]: I0126 00:23:35.650063 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54hnh\" (UniqueName: \"kubernetes.io/projected/34d875fe-deae-45f9-8d5b-6e28c4b6138a-kube-api-access-54hnh\") pod 
\"cert-manager-cainjector-7dbf76d5c8-58c5k\" (UID: \"34d875fe-deae-45f9-8d5b-6e28c4b6138a\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-58c5k" Jan 26 00:23:35 crc kubenswrapper[5110]: I0126 00:23:35.751672 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34d875fe-deae-45f9-8d5b-6e28c4b6138a-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-58c5k\" (UID: \"34d875fe-deae-45f9-8d5b-6e28c4b6138a\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-58c5k" Jan 26 00:23:35 crc kubenswrapper[5110]: I0126 00:23:35.751761 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-54hnh\" (UniqueName: \"kubernetes.io/projected/34d875fe-deae-45f9-8d5b-6e28c4b6138a-kube-api-access-54hnh\") pod \"cert-manager-cainjector-7dbf76d5c8-58c5k\" (UID: \"34d875fe-deae-45f9-8d5b-6e28c4b6138a\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-58c5k" Jan 26 00:23:35 crc kubenswrapper[5110]: I0126 00:23:35.773751 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34d875fe-deae-45f9-8d5b-6e28c4b6138a-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-58c5k\" (UID: \"34d875fe-deae-45f9-8d5b-6e28c4b6138a\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-58c5k" Jan 26 00:23:35 crc kubenswrapper[5110]: I0126 00:23:35.774608 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-54hnh\" (UniqueName: \"kubernetes.io/projected/34d875fe-deae-45f9-8d5b-6e28c4b6138a-kube-api-access-54hnh\") pod \"cert-manager-cainjector-7dbf76d5c8-58c5k\" (UID: \"34d875fe-deae-45f9-8d5b-6e28c4b6138a\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-58c5k" Jan 26 00:23:35 crc kubenswrapper[5110]: I0126 00:23:35.853828 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-webhook-7894b5b9b4-zptcb" event={"ID":"63a94427-5376-447b-af64-fbf2d9576a40","Type":"ContainerStarted","Data":"f6a21e917b5b5c1bc52980aa0035cf80a110d69c74bbfb581066a60ac05fba5f"}
Jan 26 00:23:35 crc kubenswrapper[5110]: I0126 00:23:35.903693 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-58c5k"
Jan 26 00:23:36 crc kubenswrapper[5110]: I0126 00:23:36.666757 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-58c5k"]
Jan 26 00:23:36 crc kubenswrapper[5110]: W0126 00:23:36.684870 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34d875fe_deae_45f9_8d5b_6e28c4b6138a.slice/crio-64869890a9a3216673dd770b9b3fa122259399d0650895b1859266ab286a8474 WatchSource:0}: Error finding container 64869890a9a3216673dd770b9b3fa122259399d0650895b1859266ab286a8474: Status 404 returned error can't find the container with id 64869890a9a3216673dd770b9b3fa122259399d0650895b1859266ab286a8474
Jan 26 00:23:36 crc kubenswrapper[5110]: I0126 00:23:36.861586 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-58c5k" event={"ID":"34d875fe-deae-45f9-8d5b-6e28c4b6138a","Type":"ContainerStarted","Data":"64869890a9a3216673dd770b9b3fa122259399d0650895b1859266ab286a8474"}
Jan 26 00:23:40 crc kubenswrapper[5110]: I0126 00:23:40.931903 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="4bd7a874-fbbe-472e-bedc-fcb339de5b04" containerName="elasticsearch" probeResult="failure" output=<
Jan 26 00:23:40 crc kubenswrapper[5110]: {"timestamp": "2026-01-26T00:23:40+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Jan 26 00:23:40 crc kubenswrapper[5110]: >
Jan 26 00:23:43 crc kubenswrapper[5110]: I0126 00:23:43.947134 5110 generic.go:358] "Generic (PLEG): container finished" podID="0ec8732b-c9f0-4b4e-8774-673da5c59114" containerID="954206f3371245d9f3ff649d27ddfdd424de9ecd90dfce1619cab303796c5321" exitCode=0
Jan 26 00:23:43 crc kubenswrapper[5110]: I0126 00:23:43.947163 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"0ec8732b-c9f0-4b4e-8774-673da5c59114","Type":"ContainerDied","Data":"954206f3371245d9f3ff649d27ddfdd424de9ecd90dfce1619cab303796c5321"}
Jan 26 00:23:45 crc kubenswrapper[5110]: I0126 00:23:45.953165 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="4bd7a874-fbbe-472e-bedc-fcb339de5b04" containerName="elasticsearch" probeResult="failure" output=<
Jan 26 00:23:45 crc kubenswrapper[5110]: {"timestamp": "2026-01-26T00:23:45+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Jan 26 00:23:45 crc kubenswrapper[5110]: >
Jan 26 00:23:50 crc kubenswrapper[5110]: I0126 00:23:50.882677 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="4bd7a874-fbbe-472e-bedc-fcb339de5b04" containerName="elasticsearch" probeResult="failure" output=<
Jan 26 00:23:50 crc kubenswrapper[5110]: {"timestamp": "2026-01-26T00:23:50+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Jan 26 00:23:50 crc kubenswrapper[5110]: >
Jan 26 00:23:52 crc kubenswrapper[5110]: I0126 00:23:52.427038 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-ttxf9"]
Jan 26 00:23:52 crc kubenswrapper[5110]: I0126 00:23:52.557385 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-ttxf9"]
Jan 26 00:23:52 crc kubenswrapper[5110]: I0126 00:23:52.557557 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-ttxf9"
Jan 26 00:23:52 crc kubenswrapper[5110]: I0126 00:23:52.561502 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-cq7w7\""
Jan 26 00:23:52 crc kubenswrapper[5110]: I0126 00:23:52.656604 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsm9t\" (UniqueName: \"kubernetes.io/projected/d06bb016-995f-4bbb-a490-0b336707d443-kube-api-access-fsm9t\") pod \"cert-manager-858d87f86b-ttxf9\" (UID: \"d06bb016-995f-4bbb-a490-0b336707d443\") " pod="cert-manager/cert-manager-858d87f86b-ttxf9"
Jan 26 00:23:52 crc kubenswrapper[5110]: I0126 00:23:52.656689 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d06bb016-995f-4bbb-a490-0b336707d443-bound-sa-token\") pod \"cert-manager-858d87f86b-ttxf9\" (UID: \"d06bb016-995f-4bbb-a490-0b336707d443\") " pod="cert-manager/cert-manager-858d87f86b-ttxf9"
Jan 26 00:23:52 crc kubenswrapper[5110]: I0126 00:23:52.758354 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fsm9t\" (UniqueName: \"kubernetes.io/projected/d06bb016-995f-4bbb-a490-0b336707d443-kube-api-access-fsm9t\") pod \"cert-manager-858d87f86b-ttxf9\" (UID: \"d06bb016-995f-4bbb-a490-0b336707d443\") " pod="cert-manager/cert-manager-858d87f86b-ttxf9"
Jan 26 00:23:52 crc kubenswrapper[5110]: I0126 00:23:52.758776 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d06bb016-995f-4bbb-a490-0b336707d443-bound-sa-token\") pod \"cert-manager-858d87f86b-ttxf9\" (UID: \"d06bb016-995f-4bbb-a490-0b336707d443\") " pod="cert-manager/cert-manager-858d87f86b-ttxf9"
Jan 26 00:23:52 crc kubenswrapper[5110]: I0126 00:23:52.794225 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsm9t\" (UniqueName: \"kubernetes.io/projected/d06bb016-995f-4bbb-a490-0b336707d443-kube-api-access-fsm9t\") pod \"cert-manager-858d87f86b-ttxf9\" (UID: \"d06bb016-995f-4bbb-a490-0b336707d443\") " pod="cert-manager/cert-manager-858d87f86b-ttxf9"
Jan 26 00:23:52 crc kubenswrapper[5110]: I0126 00:23:52.801921 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d06bb016-995f-4bbb-a490-0b336707d443-bound-sa-token\") pod \"cert-manager-858d87f86b-ttxf9\" (UID: \"d06bb016-995f-4bbb-a490-0b336707d443\") " pod="cert-manager/cert-manager-858d87f86b-ttxf9"
Jan 26 00:23:52 crc kubenswrapper[5110]: I0126 00:23:52.880440 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-ttxf9"
Jan 26 00:23:54 crc kubenswrapper[5110]: I0126 00:23:54.729440 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-ttxf9"]
Jan 26 00:23:54 crc kubenswrapper[5110]: W0126 00:23:54.742536 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd06bb016_995f_4bbb_a490_0b336707d443.slice/crio-28f0d7ecef00ae29a35f99983d576f19dee54bc2f8fcacc0281aacf8ca139597 WatchSource:0}: Error finding container 28f0d7ecef00ae29a35f99983d576f19dee54bc2f8fcacc0281aacf8ca139597: Status 404 returned error can't find the container with id 28f0d7ecef00ae29a35f99983d576f19dee54bc2f8fcacc0281aacf8ca139597
Jan 26 00:23:55 crc kubenswrapper[5110]: I0126 00:23:55.330314 5110 generic.go:358] "Generic (PLEG): container finished" podID="0ec8732b-c9f0-4b4e-8774-673da5c59114" containerID="862ac31dcccd196b386d89ffe9d5063e1619bfb11ad9204f0911ca41572ff8ea" exitCode=0
Jan 26 00:23:55 crc kubenswrapper[5110]: I0126 00:23:55.330412 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"0ec8732b-c9f0-4b4e-8774-673da5c59114","Type":"ContainerDied","Data":"862ac31dcccd196b386d89ffe9d5063e1619bfb11ad9204f0911ca41572ff8ea"}
Jan 26 00:23:55 crc kubenswrapper[5110]: I0126 00:23:55.332305 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-zptcb" event={"ID":"63a94427-5376-447b-af64-fbf2d9576a40","Type":"ContainerStarted","Data":"0b055edb91fa3182cd3cd9def7842e005dd4115cb2ac4fd61795ae32b72391e8"}
Jan 26 00:23:55 crc kubenswrapper[5110]: I0126 00:23:55.332575 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-zptcb"
Jan 26 00:23:55 crc kubenswrapper[5110]: I0126 00:23:55.338053 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-ttxf9" event={"ID":"d06bb016-995f-4bbb-a490-0b336707d443","Type":"ContainerStarted","Data":"b66a8bccd313b47b8cf9b3e6acde3f2b92c9c8ac3825676322060a15eabbcd0f"}
Jan 26 00:23:55 crc kubenswrapper[5110]: I0126 00:23:55.338084 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-ttxf9" event={"ID":"d06bb016-995f-4bbb-a490-0b336707d443","Type":"ContainerStarted","Data":"28f0d7ecef00ae29a35f99983d576f19dee54bc2f8fcacc0281aacf8ca139597"}
Jan 26 00:23:55 crc kubenswrapper[5110]: I0126 00:23:55.340460 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-58c5k" event={"ID":"34d875fe-deae-45f9-8d5b-6e28c4b6138a","Type":"ContainerStarted","Data":"3176b837d5e378651b12c525c5062352268fbc5936a50470ecc155ea0639b2e3"}
Jan 26 00:23:55 crc kubenswrapper[5110]: I0126 00:23:55.391872 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-ttxf9" podStartSLOduration=3.391852924 podStartE2EDuration="3.391852924s" podCreationTimestamp="2026-01-26 00:23:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:23:55.390563357 +0000 UTC m=+892.619461966" watchObservedRunningTime="2026-01-26 00:23:55.391852924 +0000 UTC m=+892.620751533"
Jan 26 00:23:55 crc kubenswrapper[5110]: I0126 00:23:55.396084 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_0ec8732b-c9f0-4b4e-8774-673da5c59114/manage-dockerfile/0.log"
Jan 26 00:23:55 crc kubenswrapper[5110]: I0126 00:23:55.419648 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-58c5k" podStartSLOduration=3.198125046 podStartE2EDuration="20.419630372s" podCreationTimestamp="2026-01-26 00:23:35 +0000 UTC" firstStartedPulling="2026-01-26 00:23:36.686996767 +0000 UTC m=+873.915895376" lastFinishedPulling="2026-01-26 00:23:53.908502093 +0000 UTC m=+891.137400702" observedRunningTime="2026-01-26 00:23:55.415358411 +0000 UTC m=+892.644257040" watchObservedRunningTime="2026-01-26 00:23:55.419630372 +0000 UTC m=+892.648528991"
Jan 26 00:23:55 crc kubenswrapper[5110]: I0126 00:23:55.439786 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-zptcb" podStartSLOduration=4.18694753 podStartE2EDuration="22.439753853s" podCreationTimestamp="2026-01-26 00:23:33 +0000 UTC" firstStartedPulling="2026-01-26 00:23:35.629359612 +0000 UTC m=+872.858258221" lastFinishedPulling="2026-01-26 00:23:53.882165935 +0000 UTC m=+891.111064544" observedRunningTime="2026-01-26 00:23:55.435571955 +0000 UTC m=+892.664470584" watchObservedRunningTime="2026-01-26 00:23:55.439753853 +0000 UTC m=+892.668652462"
Jan 26 00:23:55 crc kubenswrapper[5110]: I0126 00:23:55.848952 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="4bd7a874-fbbe-472e-bedc-fcb339de5b04" containerName="elasticsearch" probeResult="failure" output=<
Jan 26 00:23:55 crc kubenswrapper[5110]: {"timestamp": "2026-01-26T00:23:55+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Jan 26 00:23:55 crc kubenswrapper[5110]: >
Jan 26 00:23:56 crc kubenswrapper[5110]: I0126 00:23:56.356776 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"0ec8732b-c9f0-4b4e-8774-673da5c59114","Type":"ContainerStarted","Data":"b4ebdadf9f3e896c9682a5ddb2d5c40d92ae4051de3f5faf7b84e948d7819c6a"}
Jan 26 00:23:56 crc kubenswrapper[5110]: I0126 00:23:56.412052 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=30.412029395 podStartE2EDuration="30.412029395s" podCreationTimestamp="2026-01-26 00:23:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:23:56.410596915 +0000 UTC m=+893.639495594" watchObservedRunningTime="2026-01-26 00:23:56.412029395 +0000 UTC m=+893.640928034"
Jan 26 00:23:56 crc kubenswrapper[5110]: I0126 00:23:56.813079 5110 patch_prober.go:28] interesting pod/machine-config-daemon-c6tpr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 00:23:56 crc kubenswrapper[5110]: I0126 00:23:56.813150 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 00:24:00 crc kubenswrapper[5110]: I0126 00:24:00.127675 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489784-hz7xq"]
Jan 26 00:24:00 crc kubenswrapper[5110]: I0126 00:24:00.854272 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489784-hz7xq"]
Jan 26 00:24:00 crc kubenswrapper[5110]: I0126 00:24:00.854481 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489784-hz7xq"
Jan 26 00:24:00 crc kubenswrapper[5110]: I0126 00:24:00.856631 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 26 00:24:00 crc kubenswrapper[5110]: I0126 00:24:00.856786 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dfffg\""
Jan 26 00:24:00 crc kubenswrapper[5110]: I0126 00:24:00.857383 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 26 00:24:00 crc kubenswrapper[5110]: I0126 00:24:00.993417 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcn6n\" (UniqueName: \"kubernetes.io/projected/7352d7ab-22b1-4a56-9e90-4fc47ffead42-kube-api-access-pcn6n\") pod \"auto-csr-approver-29489784-hz7xq\" (UID: \"7352d7ab-22b1-4a56-9e90-4fc47ffead42\") " pod="openshift-infra/auto-csr-approver-29489784-hz7xq"
Jan 26 00:24:01 crc kubenswrapper[5110]: I0126 00:24:01.095193 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pcn6n\" (UniqueName: \"kubernetes.io/projected/7352d7ab-22b1-4a56-9e90-4fc47ffead42-kube-api-access-pcn6n\") pod \"auto-csr-approver-29489784-hz7xq\" (UID: \"7352d7ab-22b1-4a56-9e90-4fc47ffead42\") " pod="openshift-infra/auto-csr-approver-29489784-hz7xq"
Jan 26 00:24:01 crc kubenswrapper[5110]: I0126 00:24:01.117640 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcn6n\" (UniqueName: \"kubernetes.io/projected/7352d7ab-22b1-4a56-9e90-4fc47ffead42-kube-api-access-pcn6n\") pod \"auto-csr-approver-29489784-hz7xq\" (UID: \"7352d7ab-22b1-4a56-9e90-4fc47ffead42\") " pod="openshift-infra/auto-csr-approver-29489784-hz7xq"
Jan 26 00:24:01 crc kubenswrapper[5110]: I0126 00:24:01.172753 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489784-hz7xq"
Jan 26 00:24:01 crc kubenswrapper[5110]: I0126 00:24:01.249494 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0"
Jan 26 00:24:01 crc kubenswrapper[5110]: I0126 00:24:01.362912 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-zptcb"
Jan 26 00:24:01 crc kubenswrapper[5110]: I0126 00:24:01.500259 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489784-hz7xq"]
Jan 26 00:24:02 crc kubenswrapper[5110]: I0126 00:24:02.548642 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489784-hz7xq" event={"ID":"7352d7ab-22b1-4a56-9e90-4fc47ffead42","Type":"ContainerStarted","Data":"5828447bec6b08a2ef389cd73c8f27719bcbad84f8612ea337abe39125d04008"}
Jan 26 00:24:03 crc kubenswrapper[5110]: I0126 00:24:03.636227 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jh4hk_f2948d2b-fac7-4f3f-8b5f-f6f9c914daec/kube-multus/0.log"
Jan 26 00:24:03 crc kubenswrapper[5110]: I0126 00:24:03.645375 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 26 00:24:03 crc kubenswrapper[5110]: I0126 00:24:03.651911 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jh4hk_f2948d2b-fac7-4f3f-8b5f-f6f9c914daec/kube-multus/0.log"
Jan 26 00:24:03 crc kubenswrapper[5110]: I0126 00:24:03.656580 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 26 00:24:04 crc kubenswrapper[5110]: I0126 00:24:04.569084 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489784-hz7xq" event={"ID":"7352d7ab-22b1-4a56-9e90-4fc47ffead42","Type":"ContainerStarted","Data":"7a4401a4ad33e42d6d1493799097d49f967bdd77ec8a82024579594828e49f49"}
Jan 26 00:24:04 crc kubenswrapper[5110]: I0126 00:24:04.586157 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29489784-hz7xq" podStartSLOduration=3.744513784 podStartE2EDuration="4.586136647s" podCreationTimestamp="2026-01-26 00:24:00 +0000 UTC" firstStartedPulling="2026-01-26 00:24:01.509786763 +0000 UTC m=+898.738685372" lastFinishedPulling="2026-01-26 00:24:02.351409626 +0000 UTC m=+899.580308235" observedRunningTime="2026-01-26 00:24:04.5813205 +0000 UTC m=+901.810219119" watchObservedRunningTime="2026-01-26 00:24:04.586136647 +0000 UTC m=+901.815035266"
Jan 26 00:24:05 crc kubenswrapper[5110]: I0126 00:24:05.578629 5110 generic.go:358] "Generic (PLEG): container finished" podID="7352d7ab-22b1-4a56-9e90-4fc47ffead42" containerID="7a4401a4ad33e42d6d1493799097d49f967bdd77ec8a82024579594828e49f49" exitCode=0
Jan 26 00:24:05 crc kubenswrapper[5110]: I0126 00:24:05.578749 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489784-hz7xq" event={"ID":"7352d7ab-22b1-4a56-9e90-4fc47ffead42","Type":"ContainerDied","Data":"7a4401a4ad33e42d6d1493799097d49f967bdd77ec8a82024579594828e49f49"}
Jan 26 00:24:07 crc kubenswrapper[5110]: I0126 00:24:07.143975 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489784-hz7xq"
Jan 26 00:24:07 crc kubenswrapper[5110]: I0126 00:24:07.196673 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcn6n\" (UniqueName: \"kubernetes.io/projected/7352d7ab-22b1-4a56-9e90-4fc47ffead42-kube-api-access-pcn6n\") pod \"7352d7ab-22b1-4a56-9e90-4fc47ffead42\" (UID: \"7352d7ab-22b1-4a56-9e90-4fc47ffead42\") "
Jan 26 00:24:07 crc kubenswrapper[5110]: I0126 00:24:07.205346 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7352d7ab-22b1-4a56-9e90-4fc47ffead42-kube-api-access-pcn6n" (OuterVolumeSpecName: "kube-api-access-pcn6n") pod "7352d7ab-22b1-4a56-9e90-4fc47ffead42" (UID: "7352d7ab-22b1-4a56-9e90-4fc47ffead42"). InnerVolumeSpecName "kube-api-access-pcn6n". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:24:07 crc kubenswrapper[5110]: I0126 00:24:07.298468 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pcn6n\" (UniqueName: \"kubernetes.io/projected/7352d7ab-22b1-4a56-9e90-4fc47ffead42-kube-api-access-pcn6n\") on node \"crc\" DevicePath \"\""
Jan 26 00:24:07 crc kubenswrapper[5110]: I0126 00:24:07.594905 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489784-hz7xq" event={"ID":"7352d7ab-22b1-4a56-9e90-4fc47ffead42","Type":"ContainerDied","Data":"5828447bec6b08a2ef389cd73c8f27719bcbad84f8612ea337abe39125d04008"}
Jan 26 00:24:07 crc kubenswrapper[5110]: I0126 00:24:07.595010 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5828447bec6b08a2ef389cd73c8f27719bcbad84f8612ea337abe39125d04008"
Jan 26 00:24:07 crc kubenswrapper[5110]: I0126 00:24:07.594951 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489784-hz7xq"
Jan 26 00:24:07 crc kubenswrapper[5110]: I0126 00:24:07.658652 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489778-mg6x5"]
Jan 26 00:24:07 crc kubenswrapper[5110]: I0126 00:24:07.662836 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489778-mg6x5"]
Jan 26 00:24:09 crc kubenswrapper[5110]: I0126 00:24:09.335060 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e90be102-be86-406a-a029-8fc2e04db1c6" path="/var/lib/kubelet/pods/e90be102-be86-406a-a029-8fc2e04db1c6/volumes"
Jan 26 00:24:16 crc kubenswrapper[5110]: I0126 00:24:16.374532 5110 scope.go:117] "RemoveContainer" containerID="3bd4b42f9906e7eed79a8077d14be83722f741eec3dff145ee96fc3296a86085"
Jan 26 00:24:26 crc kubenswrapper[5110]: I0126 00:24:26.812744 5110 patch_prober.go:28] interesting pod/machine-config-daemon-c6tpr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 00:24:26 crc kubenswrapper[5110]: I0126 00:24:26.813691 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 00:24:26 crc kubenswrapper[5110]: I0126 00:24:26.813769 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr"
Jan 26 00:24:26 crc kubenswrapper[5110]: I0126 00:24:26.814667 5110 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f81b2a95bda7dc33d66a95aaff56fe2d3c57ed6f0906b6eb9d8b3b10b83a4ccf"} pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 00:24:26 crc kubenswrapper[5110]: I0126 00:24:26.814744 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" containerID="cri-o://f81b2a95bda7dc33d66a95aaff56fe2d3c57ed6f0906b6eb9d8b3b10b83a4ccf" gracePeriod=600
Jan 26 00:24:27 crc kubenswrapper[5110]: I0126 00:24:27.767908 5110 generic.go:358] "Generic (PLEG): container finished" podID="f15bed73-d669-439f-9828-7b952d9bfe65" containerID="f81b2a95bda7dc33d66a95aaff56fe2d3c57ed6f0906b6eb9d8b3b10b83a4ccf" exitCode=0
Jan 26 00:24:27 crc kubenswrapper[5110]: I0126 00:24:27.767994 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" event={"ID":"f15bed73-d669-439f-9828-7b952d9bfe65","Type":"ContainerDied","Data":"f81b2a95bda7dc33d66a95aaff56fe2d3c57ed6f0906b6eb9d8b3b10b83a4ccf"}
Jan 26 00:24:27 crc kubenswrapper[5110]: I0126 00:24:27.768498 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" event={"ID":"f15bed73-d669-439f-9828-7b952d9bfe65","Type":"ContainerStarted","Data":"85c836c21afd57f854e547aa21c8b990ebf524babd5afa313958f93419d014e7"}
Jan 26 00:24:27 crc kubenswrapper[5110]: I0126 00:24:27.768526 5110 scope.go:117] "RemoveContainer" containerID="8cf1a32c65b02796064cd080f35e06d7241ce6749daefe8e41aaf499a12db038"
Jan 26 00:24:44 crc kubenswrapper[5110]: I0126 00:24:44.356057 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n6942"]
Jan 26 00:24:44 crc kubenswrapper[5110]: I0126 00:24:44.358129 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7352d7ab-22b1-4a56-9e90-4fc47ffead42" containerName="oc"
Jan 26 00:24:44 crc kubenswrapper[5110]: I0126 00:24:44.358155 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="7352d7ab-22b1-4a56-9e90-4fc47ffead42" containerName="oc"
Jan 26 00:24:44 crc kubenswrapper[5110]: I0126 00:24:44.358418 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="7352d7ab-22b1-4a56-9e90-4fc47ffead42" containerName="oc"
Jan 26 00:24:44 crc kubenswrapper[5110]: I0126 00:24:44.374684 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n6942"
Jan 26 00:24:44 crc kubenswrapper[5110]: I0126 00:24:44.386044 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n6942"]
Jan 26 00:24:44 crc kubenswrapper[5110]: I0126 00:24:44.431998 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw24k\" (UniqueName: \"kubernetes.io/projected/f4973ad6-0053-4e9b-874b-9a060374655a-kube-api-access-mw24k\") pod \"community-operators-n6942\" (UID: \"f4973ad6-0053-4e9b-874b-9a060374655a\") " pod="openshift-marketplace/community-operators-n6942"
Jan 26 00:24:44 crc kubenswrapper[5110]: I0126 00:24:44.432077 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4973ad6-0053-4e9b-874b-9a060374655a-catalog-content\") pod \"community-operators-n6942\" (UID: \"f4973ad6-0053-4e9b-874b-9a060374655a\") " pod="openshift-marketplace/community-operators-n6942"
Jan 26 00:24:44 crc kubenswrapper[5110]: I0126 00:24:44.432121 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4973ad6-0053-4e9b-874b-9a060374655a-utilities\") pod \"community-operators-n6942\" (UID: \"f4973ad6-0053-4e9b-874b-9a060374655a\") " pod="openshift-marketplace/community-operators-n6942"
Jan 26 00:24:44 crc kubenswrapper[5110]: I0126 00:24:44.534629 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mw24k\" (UniqueName: \"kubernetes.io/projected/f4973ad6-0053-4e9b-874b-9a060374655a-kube-api-access-mw24k\") pod \"community-operators-n6942\" (UID: \"f4973ad6-0053-4e9b-874b-9a060374655a\") " pod="openshift-marketplace/community-operators-n6942"
Jan 26 00:24:44 crc kubenswrapper[5110]: I0126 00:24:44.534710 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4973ad6-0053-4e9b-874b-9a060374655a-catalog-content\") pod \"community-operators-n6942\" (UID: \"f4973ad6-0053-4e9b-874b-9a060374655a\") " pod="openshift-marketplace/community-operators-n6942"
Jan 26 00:24:44 crc kubenswrapper[5110]: I0126 00:24:44.534743 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4973ad6-0053-4e9b-874b-9a060374655a-utilities\") pod \"community-operators-n6942\" (UID: \"f4973ad6-0053-4e9b-874b-9a060374655a\") " pod="openshift-marketplace/community-operators-n6942"
Jan 26 00:24:44 crc kubenswrapper[5110]: I0126 00:24:44.535453 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4973ad6-0053-4e9b-874b-9a060374655a-utilities\") pod \"community-operators-n6942\" (UID: \"f4973ad6-0053-4e9b-874b-9a060374655a\") " pod="openshift-marketplace/community-operators-n6942"
Jan 26 00:24:44 crc kubenswrapper[5110]: I0126 00:24:44.535746 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4973ad6-0053-4e9b-874b-9a060374655a-catalog-content\") pod \"community-operators-n6942\" (UID: \"f4973ad6-0053-4e9b-874b-9a060374655a\") " pod="openshift-marketplace/community-operators-n6942"
Jan 26 00:24:44 crc kubenswrapper[5110]: I0126 00:24:44.565759 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw24k\" (UniqueName: \"kubernetes.io/projected/f4973ad6-0053-4e9b-874b-9a060374655a-kube-api-access-mw24k\") pod \"community-operators-n6942\" (UID: \"f4973ad6-0053-4e9b-874b-9a060374655a\") " pod="openshift-marketplace/community-operators-n6942"
Jan 26 00:24:44 crc kubenswrapper[5110]: I0126 00:24:44.696182 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n6942"
Jan 26 00:24:45 crc kubenswrapper[5110]: I0126 00:24:45.298299 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n6942"]
Jan 26 00:24:46 crc kubenswrapper[5110]: I0126 00:24:46.199373 5110 generic.go:358] "Generic (PLEG): container finished" podID="f4973ad6-0053-4e9b-874b-9a060374655a" containerID="5c7787178f11957006fdf354bf6336fc913535e509ffa390c914593dfd621e45" exitCode=0
Jan 26 00:24:46 crc kubenswrapper[5110]: I0126 00:24:46.199506 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6942" event={"ID":"f4973ad6-0053-4e9b-874b-9a060374655a","Type":"ContainerDied","Data":"5c7787178f11957006fdf354bf6336fc913535e509ffa390c914593dfd621e45"}
Jan 26 00:24:46 crc kubenswrapper[5110]: I0126 00:24:46.200013 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6942" event={"ID":"f4973ad6-0053-4e9b-874b-9a060374655a","Type":"ContainerStarted","Data":"f8dfea0c00793f94f00a83743e1cb63566e414db21ff480d96251758f3b3404d"}
Jan 26 00:24:47 crc kubenswrapper[5110]: I0126 00:24:47.271512 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6942" event={"ID":"f4973ad6-0053-4e9b-874b-9a060374655a","Type":"ContainerStarted","Data":"4423ca456ff00de977f2b5b38c64fd0642c1ce3a1133c7623100ed022af847f6"}
Jan 26 00:24:48 crc kubenswrapper[5110]: I0126 00:24:48.281385 5110 generic.go:358] "Generic (PLEG): container finished" podID="f4973ad6-0053-4e9b-874b-9a060374655a" containerID="4423ca456ff00de977f2b5b38c64fd0642c1ce3a1133c7623100ed022af847f6" exitCode=0
Jan 26 00:24:48 crc kubenswrapper[5110]: I0126 00:24:48.281493 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6942" event={"ID":"f4973ad6-0053-4e9b-874b-9a060374655a","Type":"ContainerDied","Data":"4423ca456ff00de977f2b5b38c64fd0642c1ce3a1133c7623100ed022af847f6"}
Jan 26 00:24:49 crc kubenswrapper[5110]: I0126 00:24:49.290248 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6942" event={"ID":"f4973ad6-0053-4e9b-874b-9a060374655a","Type":"ContainerStarted","Data":"6712cae0bd488e1060b9fa4e2f545bc6dd32ec8ddfd695392ad1d56e7dbb93bf"}
Jan 26 00:24:49 crc kubenswrapper[5110]: I0126 00:24:49.311086 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n6942" podStartSLOduration=4.718319343 podStartE2EDuration="5.31106927s" podCreationTimestamp="2026-01-26 00:24:44 +0000 UTC" firstStartedPulling="2026-01-26 00:24:46.200778612 +0000 UTC m=+943.429677221" lastFinishedPulling="2026-01-26 00:24:46.793528539 +0000 UTC m=+944.022427148" observedRunningTime="2026-01-26 00:24:49.310264897 +0000 UTC m=+946.539163516" watchObservedRunningTime="2026-01-26 00:24:49.31106927 +0000 UTC m=+946.539967879"
Jan 26 00:24:54 crc kubenswrapper[5110]: I0126 00:24:54.697452 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n6942"
Jan 26 00:24:54 crc kubenswrapper[5110]: I0126 00:24:54.698486 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-n6942"
Jan 26 00:24:54 crc kubenswrapper[5110]: I0126 00:24:54.749276 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n6942"
Jan 26 00:24:55 crc kubenswrapper[5110]: I0126 00:24:55.381763 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n6942"
Jan 26 00:24:55 crc kubenswrapper[5110]: I0126 00:24:55.432757 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n6942"]
Jan 26 00:24:57 crc kubenswrapper[5110]: I0126 00:24:57.370111 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n6942" podUID="f4973ad6-0053-4e9b-874b-9a060374655a" containerName="registry-server" containerID="cri-o://6712cae0bd488e1060b9fa4e2f545bc6dd32ec8ddfd695392ad1d56e7dbb93bf" gracePeriod=2
Jan 26 00:24:58 crc kubenswrapper[5110]: I0126 00:24:58.382833 5110 generic.go:358] "Generic (PLEG): container finished" podID="f4973ad6-0053-4e9b-874b-9a060374655a" containerID="6712cae0bd488e1060b9fa4e2f545bc6dd32ec8ddfd695392ad1d56e7dbb93bf" exitCode=0
Jan 26 00:24:58 crc kubenswrapper[5110]: I0126 00:24:58.382919 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6942" event={"ID":"f4973ad6-0053-4e9b-874b-9a060374655a","Type":"ContainerDied","Data":"6712cae0bd488e1060b9fa4e2f545bc6dd32ec8ddfd695392ad1d56e7dbb93bf"}
Jan 26 00:24:58 crc kubenswrapper[5110]: I0126 00:24:58.500424 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n6942"
Jan 26 00:24:58 crc kubenswrapper[5110]: I0126 00:24:58.521660 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4973ad6-0053-4e9b-874b-9a060374655a-catalog-content\") pod \"f4973ad6-0053-4e9b-874b-9a060374655a\" (UID: \"f4973ad6-0053-4e9b-874b-9a060374655a\") "
Jan 26 00:24:58 crc kubenswrapper[5110]: I0126 00:24:58.521746 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4973ad6-0053-4e9b-874b-9a060374655a-utilities\") pod \"f4973ad6-0053-4e9b-874b-9a060374655a\" (UID: \"f4973ad6-0053-4e9b-874b-9a060374655a\") "
Jan 26 00:24:58 crc kubenswrapper[5110]: I0126 00:24:58.521832 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw24k\" (UniqueName: \"kubernetes.io/projected/f4973ad6-0053-4e9b-874b-9a060374655a-kube-api-access-mw24k\") pod \"f4973ad6-0053-4e9b-874b-9a060374655a\" (UID: \"f4973ad6-0053-4e9b-874b-9a060374655a\") "
Jan 26 00:24:58 crc kubenswrapper[5110]: I0126 00:24:58.525981 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4973ad6-0053-4e9b-874b-9a060374655a-utilities" (OuterVolumeSpecName: "utilities") pod "f4973ad6-0053-4e9b-874b-9a060374655a" (UID: "f4973ad6-0053-4e9b-874b-9a060374655a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:24:58 crc kubenswrapper[5110]: I0126 00:24:58.545441 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4973ad6-0053-4e9b-874b-9a060374655a-kube-api-access-mw24k" (OuterVolumeSpecName: "kube-api-access-mw24k") pod "f4973ad6-0053-4e9b-874b-9a060374655a" (UID: "f4973ad6-0053-4e9b-874b-9a060374655a"). InnerVolumeSpecName "kube-api-access-mw24k". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:24:58 crc kubenswrapper[5110]: I0126 00:24:58.587126 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4973ad6-0053-4e9b-874b-9a060374655a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4973ad6-0053-4e9b-874b-9a060374655a" (UID: "f4973ad6-0053-4e9b-874b-9a060374655a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:24:58 crc kubenswrapper[5110]: I0126 00:24:58.623362 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4973ad6-0053-4e9b-874b-9a060374655a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 00:24:58 crc kubenswrapper[5110]: I0126 00:24:58.623397 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4973ad6-0053-4e9b-874b-9a060374655a-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 00:24:58 crc kubenswrapper[5110]: I0126 00:24:58.623409 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mw24k\" (UniqueName: \"kubernetes.io/projected/f4973ad6-0053-4e9b-874b-9a060374655a-kube-api-access-mw24k\") on node \"crc\" DevicePath \"\""
Jan 26 00:24:59 crc kubenswrapper[5110]: I0126 00:24:59.396442 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n6942"
Jan 26 00:24:59 crc kubenswrapper[5110]: I0126 00:24:59.396400 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6942" event={"ID":"f4973ad6-0053-4e9b-874b-9a060374655a","Type":"ContainerDied","Data":"f8dfea0c00793f94f00a83743e1cb63566e414db21ff480d96251758f3b3404d"}
Jan 26 00:24:59 crc kubenswrapper[5110]: I0126 00:24:59.396618 5110 scope.go:117] "RemoveContainer" containerID="6712cae0bd488e1060b9fa4e2f545bc6dd32ec8ddfd695392ad1d56e7dbb93bf"
Jan 26 00:24:59 crc kubenswrapper[5110]: I0126 00:24:59.432027 5110 scope.go:117] "RemoveContainer" containerID="4423ca456ff00de977f2b5b38c64fd0642c1ce3a1133c7623100ed022af847f6"
Jan 26 00:24:59 crc kubenswrapper[5110]: I0126 00:24:59.438758 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n6942"]
Jan 26 00:24:59 crc kubenswrapper[5110]: I0126 00:24:59.452844 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n6942"]
Jan 26 00:24:59 crc kubenswrapper[5110]: I0126 00:24:59.453119 5110 scope.go:117] "RemoveContainer" containerID="5c7787178f11957006fdf354bf6336fc913535e509ffa390c914593dfd621e45"
Jan 26 00:24:59 crc kubenswrapper[5110]: E0126 00:24:59.509752 5110 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4973ad6_0053_4e9b_874b_9a060374655a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4973ad6_0053_4e9b_874b_9a060374655a.slice/crio-f8dfea0c00793f94f00a83743e1cb63566e414db21ff480d96251758f3b3404d\": RecentStats: unable to find data in memory cache]"
Jan 26 00:25:01 crc kubenswrapper[5110]: I0126 00:25:01.324585 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir"
podUID="f4973ad6-0053-4e9b-874b-9a060374655a" path="/var/lib/kubelet/pods/f4973ad6-0053-4e9b-874b-9a060374655a/volumes" Jan 26 00:25:23 crc kubenswrapper[5110]: I0126 00:25:23.734067 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_0ec8732b-c9f0-4b4e-8774-673da5c59114/docker-build/0.log" Jan 26 00:25:23 crc kubenswrapper[5110]: I0126 00:25:23.735680 5110 generic.go:358] "Generic (PLEG): container finished" podID="0ec8732b-c9f0-4b4e-8774-673da5c59114" containerID="b4ebdadf9f3e896c9682a5ddb2d5c40d92ae4051de3f5faf7b84e948d7819c6a" exitCode=1 Jan 26 00:25:23 crc kubenswrapper[5110]: I0126 00:25:23.735779 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"0ec8732b-c9f0-4b4e-8774-673da5c59114","Type":"ContainerDied","Data":"b4ebdadf9f3e896c9682a5ddb2d5c40d92ae4051de3f5faf7b84e948d7819c6a"} Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.029276 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_0ec8732b-c9f0-4b4e-8774-673da5c59114/docker-build/0.log" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.030687 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.140137 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-system-configs\") pod \"0ec8732b-c9f0-4b4e-8774-673da5c59114\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.140209 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0ec8732b-c9f0-4b4e-8774-673da5c59114-node-pullsecrets\") pod \"0ec8732b-c9f0-4b4e-8774-673da5c59114\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.140276 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwktw\" (UniqueName: \"kubernetes.io/projected/0ec8732b-c9f0-4b4e-8774-673da5c59114-kube-api-access-pwktw\") pod \"0ec8732b-c9f0-4b4e-8774-673da5c59114\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.140310 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-blob-cache\") pod \"0ec8732b-c9f0-4b4e-8774-673da5c59114\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.140363 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-container-storage-root\") pod \"0ec8732b-c9f0-4b4e-8774-673da5c59114\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.140386 5110 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-buildworkdir\") pod \"0ec8732b-c9f0-4b4e-8774-673da5c59114\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.140378 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec8732b-c9f0-4b4e-8774-673da5c59114-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "0ec8732b-c9f0-4b4e-8774-673da5c59114" (UID: "0ec8732b-c9f0-4b4e-8774-673da5c59114"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.140446 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec8732b-c9f0-4b4e-8774-673da5c59114-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "0ec8732b-c9f0-4b4e-8774-673da5c59114" (UID: "0ec8732b-c9f0-4b4e-8774-673da5c59114"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.140408 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0ec8732b-c9f0-4b4e-8774-673da5c59114-buildcachedir\") pod \"0ec8732b-c9f0-4b4e-8774-673da5c59114\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.140564 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/0ec8732b-c9f0-4b4e-8774-673da5c59114-builder-dockercfg-ng4dn-pull\") pod \"0ec8732b-c9f0-4b4e-8774-673da5c59114\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.140625 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-ca-bundles\") pod \"0ec8732b-c9f0-4b4e-8774-673da5c59114\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.140830 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/0ec8732b-c9f0-4b4e-8774-673da5c59114-builder-dockercfg-ng4dn-push\") pod \"0ec8732b-c9f0-4b4e-8774-673da5c59114\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.140937 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-proxy-ca-bundles\") pod \"0ec8732b-c9f0-4b4e-8774-673da5c59114\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.141000 5110 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-container-storage-run\") pod \"0ec8732b-c9f0-4b4e-8774-673da5c59114\" (UID: \"0ec8732b-c9f0-4b4e-8774-673da5c59114\") " Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.141930 5110 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0ec8732b-c9f0-4b4e-8774-673da5c59114-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.141956 5110 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0ec8732b-c9f0-4b4e-8774-673da5c59114-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.141917 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "0ec8732b-c9f0-4b4e-8774-673da5c59114" (UID: "0ec8732b-c9f0-4b4e-8774-673da5c59114"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.142003 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "0ec8732b-c9f0-4b4e-8774-673da5c59114" (UID: "0ec8732b-c9f0-4b4e-8774-673da5c59114"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.142432 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "0ec8732b-c9f0-4b4e-8774-673da5c59114" (UID: "0ec8732b-c9f0-4b4e-8774-673da5c59114"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.143429 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "0ec8732b-c9f0-4b4e-8774-673da5c59114" (UID: "0ec8732b-c9f0-4b4e-8774-673da5c59114"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.149503 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec8732b-c9f0-4b4e-8774-673da5c59114-builder-dockercfg-ng4dn-pull" (OuterVolumeSpecName: "builder-dockercfg-ng4dn-pull") pod "0ec8732b-c9f0-4b4e-8774-673da5c59114" (UID: "0ec8732b-c9f0-4b4e-8774-673da5c59114"). InnerVolumeSpecName "builder-dockercfg-ng4dn-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.151708 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec8732b-c9f0-4b4e-8774-673da5c59114-builder-dockercfg-ng4dn-push" (OuterVolumeSpecName: "builder-dockercfg-ng4dn-push") pod "0ec8732b-c9f0-4b4e-8774-673da5c59114" (UID: "0ec8732b-c9f0-4b4e-8774-673da5c59114"). InnerVolumeSpecName "builder-dockercfg-ng4dn-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.152005 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ec8732b-c9f0-4b4e-8774-673da5c59114-kube-api-access-pwktw" (OuterVolumeSpecName: "kube-api-access-pwktw") pod "0ec8732b-c9f0-4b4e-8774-673da5c59114" (UID: "0ec8732b-c9f0-4b4e-8774-673da5c59114"). InnerVolumeSpecName "kube-api-access-pwktw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.183349 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "0ec8732b-c9f0-4b4e-8774-673da5c59114" (UID: "0ec8732b-c9f0-4b4e-8774-673da5c59114"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.243581 5110 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.243650 5110 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.243660 5110 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.243669 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pwktw\" (UniqueName: 
\"kubernetes.io/projected/0ec8732b-c9f0-4b4e-8774-673da5c59114-kube-api-access-pwktw\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.243681 5110 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.243693 5110 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/0ec8732b-c9f0-4b4e-8774-673da5c59114-builder-dockercfg-ng4dn-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.243702 5110 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.243711 5110 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/0ec8732b-c9f0-4b4e-8774-673da5c59114-builder-dockercfg-ng4dn-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.350611 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "0ec8732b-c9f0-4b4e-8774-673da5c59114" (UID: "0ec8732b-c9f0-4b4e-8774-673da5c59114"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.447133 5110 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.761938 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_0ec8732b-c9f0-4b4e-8774-673da5c59114/docker-build/0.log" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.774966 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"0ec8732b-c9f0-4b4e-8774-673da5c59114","Type":"ContainerDied","Data":"0cc444e611b483a35b5d31842c4f3502651d3920e50d7c6dfca5bd1f9d6b70f6"} Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.775037 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cc444e611b483a35b5d31842c4f3502651d3920e50d7c6dfca5bd1f9d6b70f6" Jan 26 00:25:25 crc kubenswrapper[5110]: I0126 00:25:25.775157 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:27 crc kubenswrapper[5110]: I0126 00:25:27.159509 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "0ec8732b-c9f0-4b4e-8774-673da5c59114" (UID: "0ec8732b-c9f0-4b4e-8774-673da5c59114"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:25:27 crc kubenswrapper[5110]: I0126 00:25:27.186884 5110 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0ec8732b-c9f0-4b4e-8774-673da5c59114-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.194488 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.195849 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec8732b-c9f0-4b4e-8774-673da5c59114" containerName="git-clone" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.195869 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec8732b-c9f0-4b4e-8774-673da5c59114" containerName="git-clone" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.195881 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f4973ad6-0053-4e9b-874b-9a060374655a" containerName="extract-content" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.195887 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4973ad6-0053-4e9b-874b-9a060374655a" containerName="extract-content" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.195899 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f4973ad6-0053-4e9b-874b-9a060374655a" containerName="extract-utilities" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.195909 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4973ad6-0053-4e9b-874b-9a060374655a" containerName="extract-utilities" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.195932 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f4973ad6-0053-4e9b-874b-9a060374655a" containerName="registry-server" Jan 26 00:25:36 crc kubenswrapper[5110]: 
I0126 00:25:36.195937 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4973ad6-0053-4e9b-874b-9a060374655a" containerName="registry-server" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.195948 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec8732b-c9f0-4b4e-8774-673da5c59114" containerName="manage-dockerfile" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.195954 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec8732b-c9f0-4b4e-8774-673da5c59114" containerName="manage-dockerfile" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.195963 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec8732b-c9f0-4b4e-8774-673da5c59114" containerName="docker-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.195969 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec8732b-c9f0-4b4e-8774-673da5c59114" containerName="docker-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.196090 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ec8732b-c9f0-4b4e-8774-673da5c59114" containerName="docker-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.196104 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="f4973ad6-0053-4e9b-874b-9a060374655a" containerName="registry-server" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.202776 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.205902 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-ca\"" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.206248 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-ng4dn\"" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.207915 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-sys-config\"" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.208035 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-global-ca\"" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.235352 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.428992 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f8838ea6-b825-4903-a386-edaf6e7372c6-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.429500 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f8838ea6-b825-4903-a386-edaf6e7372c6-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 
00:25:36.429550 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.429620 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.429648 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8838ea6-b825-4903-a386-edaf6e7372c6-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.429677 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.429703 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d9pd\" (UniqueName: \"kubernetes.io/projected/f8838ea6-b825-4903-a386-edaf6e7372c6-kube-api-access-5d9pd\") pod 
\"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.430228 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/f8838ea6-b825-4903-a386-edaf6e7372c6-builder-dockercfg-ng4dn-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.430310 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f8838ea6-b825-4903-a386-edaf6e7372c6-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.430360 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8838ea6-b825-4903-a386-edaf6e7372c6-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.430387 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.430415 5110 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/f8838ea6-b825-4903-a386-edaf6e7372c6-builder-dockercfg-ng4dn-push\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.532035 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.532626 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8838ea6-b825-4903-a386-edaf6e7372c6-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.532557 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.532822 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " 
pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.532874 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5d9pd\" (UniqueName: \"kubernetes.io/projected/f8838ea6-b825-4903-a386-edaf6e7372c6-kube-api-access-5d9pd\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.532898 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/f8838ea6-b825-4903-a386-edaf6e7372c6-builder-dockercfg-ng4dn-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.532941 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f8838ea6-b825-4903-a386-edaf6e7372c6-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.532980 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8838ea6-b825-4903-a386-edaf6e7372c6-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.533001 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.533020 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/f8838ea6-b825-4903-a386-edaf6e7372c6-builder-dockercfg-ng4dn-push\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.533094 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f8838ea6-b825-4903-a386-edaf6e7372c6-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.533168 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f8838ea6-b825-4903-a386-edaf6e7372c6-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.533176 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f8838ea6-b825-4903-a386-edaf6e7372c6-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.533202 5110 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.533449 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.534057 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8838ea6-b825-4903-a386-edaf6e7372c6-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.534179 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.534243 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8838ea6-b825-4903-a386-edaf6e7372c6-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 
00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.534706 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f8838ea6-b825-4903-a386-edaf6e7372c6-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.534740 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.534833 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f8838ea6-b825-4903-a386-edaf6e7372c6-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.542147 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/f8838ea6-b825-4903-a386-edaf6e7372c6-builder-dockercfg-ng4dn-push\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.542177 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/f8838ea6-b825-4903-a386-edaf6e7372c6-builder-dockercfg-ng4dn-pull\") pod \"service-telemetry-operator-3-build\" (UID: 
\"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.556054 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d9pd\" (UniqueName: \"kubernetes.io/projected/f8838ea6-b825-4903-a386-edaf6e7372c6-kube-api-access-5d9pd\") pod \"service-telemetry-operator-3-build\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:36 crc kubenswrapper[5110]: I0126 00:25:36.826674 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:37 crc kubenswrapper[5110]: I0126 00:25:37.145979 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 26 00:25:37 crc kubenswrapper[5110]: I0126 00:25:37.154642 5110 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:25:37 crc kubenswrapper[5110]: I0126 00:25:37.889147 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"f8838ea6-b825-4903-a386-edaf6e7372c6","Type":"ContainerStarted","Data":"0a7b021cc65e8be72e83645daeecf889abf7d0d468b94ba31690bd0686dc0b34"} Jan 26 00:25:37 crc kubenswrapper[5110]: I0126 00:25:37.889695 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"f8838ea6-b825-4903-a386-edaf6e7372c6","Type":"ContainerStarted","Data":"91499f9ef592f1d738ac8dd42e030de2a92424a97568b572f161a47e1d76f57e"} Jan 26 00:25:46 crc kubenswrapper[5110]: I0126 00:25:46.981415 5110 generic.go:358] "Generic (PLEG): container finished" podID="f8838ea6-b825-4903-a386-edaf6e7372c6" containerID="0a7b021cc65e8be72e83645daeecf889abf7d0d468b94ba31690bd0686dc0b34" exitCode=0 Jan 26 00:25:46 crc 
kubenswrapper[5110]: I0126 00:25:46.981521 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"f8838ea6-b825-4903-a386-edaf6e7372c6","Type":"ContainerDied","Data":"0a7b021cc65e8be72e83645daeecf889abf7d0d468b94ba31690bd0686dc0b34"} Jan 26 00:25:47 crc kubenswrapper[5110]: I0126 00:25:47.995439 5110 generic.go:358] "Generic (PLEG): container finished" podID="f8838ea6-b825-4903-a386-edaf6e7372c6" containerID="9bd9eaf6a9e8927478b7a8e67add7e6e5af8735136e7b90af11453a686199662" exitCode=0 Jan 26 00:25:47 crc kubenswrapper[5110]: I0126 00:25:47.995542 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"f8838ea6-b825-4903-a386-edaf6e7372c6","Type":"ContainerDied","Data":"9bd9eaf6a9e8927478b7a8e67add7e6e5af8735136e7b90af11453a686199662"} Jan 26 00:25:48 crc kubenswrapper[5110]: I0126 00:25:48.047077 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_f8838ea6-b825-4903-a386-edaf6e7372c6/manage-dockerfile/0.log" Jan 26 00:25:49 crc kubenswrapper[5110]: I0126 00:25:49.015425 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"f8838ea6-b825-4903-a386-edaf6e7372c6","Type":"ContainerStarted","Data":"209c96540c044d8e3439b770b1ad6b8e0a787745e58a14680cf5974df887ce37"} Jan 26 00:25:49 crc kubenswrapper[5110]: I0126 00:25:49.073932 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-3-build" podStartSLOduration=13.073876359 podStartE2EDuration="13.073876359s" podCreationTimestamp="2026-01-26 00:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:25:49.061877878 +0000 UTC m=+1006.290776487" watchObservedRunningTime="2026-01-26 
00:25:49.073876359 +0000 UTC m=+1006.302775008" Jan 26 00:26:00 crc kubenswrapper[5110]: I0126 00:26:00.154215 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489786-25xw5"] Jan 26 00:26:00 crc kubenswrapper[5110]: I0126 00:26:00.168815 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489786-25xw5"] Jan 26 00:26:00 crc kubenswrapper[5110]: I0126 00:26:00.168989 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489786-25xw5" Jan 26 00:26:00 crc kubenswrapper[5110]: I0126 00:26:00.175117 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:26:00 crc kubenswrapper[5110]: I0126 00:26:00.175299 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:26:00 crc kubenswrapper[5110]: I0126 00:26:00.175992 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dfffg\"" Jan 26 00:26:00 crc kubenswrapper[5110]: I0126 00:26:00.276727 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtkpm\" (UniqueName: \"kubernetes.io/projected/9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec-kube-api-access-qtkpm\") pod \"auto-csr-approver-29489786-25xw5\" (UID: \"9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec\") " pod="openshift-infra/auto-csr-approver-29489786-25xw5" Jan 26 00:26:00 crc kubenswrapper[5110]: I0126 00:26:00.378745 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qtkpm\" (UniqueName: \"kubernetes.io/projected/9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec-kube-api-access-qtkpm\") pod \"auto-csr-approver-29489786-25xw5\" (UID: \"9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec\") " 
pod="openshift-infra/auto-csr-approver-29489786-25xw5" Jan 26 00:26:00 crc kubenswrapper[5110]: I0126 00:26:00.417601 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtkpm\" (UniqueName: \"kubernetes.io/projected/9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec-kube-api-access-qtkpm\") pod \"auto-csr-approver-29489786-25xw5\" (UID: \"9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec\") " pod="openshift-infra/auto-csr-approver-29489786-25xw5" Jan 26 00:26:00 crc kubenswrapper[5110]: I0126 00:26:00.504343 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489786-25xw5" Jan 26 00:26:00 crc kubenswrapper[5110]: I0126 00:26:00.791385 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489786-25xw5"] Jan 26 00:26:01 crc kubenswrapper[5110]: I0126 00:26:01.122422 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489786-25xw5" event={"ID":"9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec","Type":"ContainerStarted","Data":"c9a6e39218e322937b17864472e42de5a6674273b21ed8745d4bc0bc515cb1d6"} Jan 26 00:26:03 crc kubenswrapper[5110]: I0126 00:26:03.141551 5110 generic.go:358] "Generic (PLEG): container finished" podID="9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec" containerID="56f1b3738e12c88ec58398333d5aef5efd8960491746aa1b41af72c5e09afefe" exitCode=0 Jan 26 00:26:03 crc kubenswrapper[5110]: I0126 00:26:03.141674 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489786-25xw5" event={"ID":"9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec","Type":"ContainerDied","Data":"56f1b3738e12c88ec58398333d5aef5efd8960491746aa1b41af72c5e09afefe"} Jan 26 00:26:04 crc kubenswrapper[5110]: I0126 00:26:04.454279 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489786-25xw5" Jan 26 00:26:04 crc kubenswrapper[5110]: I0126 00:26:04.539446 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtkpm\" (UniqueName: \"kubernetes.io/projected/9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec-kube-api-access-qtkpm\") pod \"9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec\" (UID: \"9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec\") " Jan 26 00:26:04 crc kubenswrapper[5110]: I0126 00:26:04.546445 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec-kube-api-access-qtkpm" (OuterVolumeSpecName: "kube-api-access-qtkpm") pod "9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec" (UID: "9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec"). InnerVolumeSpecName "kube-api-access-qtkpm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:26:04 crc kubenswrapper[5110]: I0126 00:26:04.641919 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qtkpm\" (UniqueName: \"kubernetes.io/projected/9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec-kube-api-access-qtkpm\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:05 crc kubenswrapper[5110]: I0126 00:26:05.156754 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489786-25xw5" event={"ID":"9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec","Type":"ContainerDied","Data":"c9a6e39218e322937b17864472e42de5a6674273b21ed8745d4bc0bc515cb1d6"} Jan 26 00:26:05 crc kubenswrapper[5110]: I0126 00:26:05.156820 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9a6e39218e322937b17864472e42de5a6674273b21ed8745d4bc0bc515cb1d6" Jan 26 00:26:05 crc kubenswrapper[5110]: I0126 00:26:05.156932 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489786-25xw5" Jan 26 00:26:05 crc kubenswrapper[5110]: I0126 00:26:05.516040 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489780-n2kl5"] Jan 26 00:26:05 crc kubenswrapper[5110]: I0126 00:26:05.519529 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489780-n2kl5"] Jan 26 00:26:07 crc kubenswrapper[5110]: I0126 00:26:07.329218 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="616e145b-2b1b-40bf-94a6-a54da571e102" path="/var/lib/kubelet/pods/616e145b-2b1b-40bf-94a6-a54da571e102/volumes" Jan 26 00:26:16 crc kubenswrapper[5110]: I0126 00:26:16.612291 5110 scope.go:117] "RemoveContainer" containerID="fcd41525cdd600223dfb1d80d7f6df5a8805e634339d46bccde54adf573ee530" Jan 26 00:26:56 crc kubenswrapper[5110]: I0126 00:26:56.812576 5110 patch_prober.go:28] interesting pod/machine-config-daemon-c6tpr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:26:56 crc kubenswrapper[5110]: I0126 00:26:56.813461 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:27:02 crc kubenswrapper[5110]: I0126 00:27:02.991141 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_f8838ea6-b825-4903-a386-edaf6e7372c6/docker-build/0.log" Jan 26 00:27:02 crc kubenswrapper[5110]: I0126 00:27:02.993622 5110 generic.go:358] "Generic (PLEG): container finished" 
podID="f8838ea6-b825-4903-a386-edaf6e7372c6" containerID="209c96540c044d8e3439b770b1ad6b8e0a787745e58a14680cf5974df887ce37" exitCode=1 Jan 26 00:27:02 crc kubenswrapper[5110]: I0126 00:27:02.993762 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"f8838ea6-b825-4903-a386-edaf6e7372c6","Type":"ContainerDied","Data":"209c96540c044d8e3439b770b1ad6b8e0a787745e58a14680cf5974df887ce37"} Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.285431 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_f8838ea6-b825-4903-a386-edaf6e7372c6/docker-build/0.log" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.286788 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.455669 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/f8838ea6-b825-4903-a386-edaf6e7372c6-builder-dockercfg-ng4dn-push\") pod \"f8838ea6-b825-4903-a386-edaf6e7372c6\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.455872 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-build-blob-cache\") pod \"f8838ea6-b825-4903-a386-edaf6e7372c6\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.456089 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-buildworkdir\") pod \"f8838ea6-b825-4903-a386-edaf6e7372c6\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " Jan 26 
00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.456259 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8838ea6-b825-4903-a386-edaf6e7372c6-build-ca-bundles\") pod \"f8838ea6-b825-4903-a386-edaf6e7372c6\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.456296 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f8838ea6-b825-4903-a386-edaf6e7372c6-buildcachedir\") pod \"f8838ea6-b825-4903-a386-edaf6e7372c6\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.456330 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f8838ea6-b825-4903-a386-edaf6e7372c6-build-system-configs\") pod \"f8838ea6-b825-4903-a386-edaf6e7372c6\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.456369 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-container-storage-root\") pod \"f8838ea6-b825-4903-a386-edaf6e7372c6\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.456426 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-container-storage-run\") pod \"f8838ea6-b825-4903-a386-edaf6e7372c6\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.456484 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d9pd\" (UniqueName: 
\"kubernetes.io/projected/f8838ea6-b825-4903-a386-edaf6e7372c6-kube-api-access-5d9pd\") pod \"f8838ea6-b825-4903-a386-edaf6e7372c6\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.456473 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8838ea6-b825-4903-a386-edaf6e7372c6-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "f8838ea6-b825-4903-a386-edaf6e7372c6" (UID: "f8838ea6-b825-4903-a386-edaf6e7372c6"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.456550 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8838ea6-b825-4903-a386-edaf6e7372c6-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "f8838ea6-b825-4903-a386-edaf6e7372c6" (UID: "f8838ea6-b825-4903-a386-edaf6e7372c6"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.456522 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f8838ea6-b825-4903-a386-edaf6e7372c6-node-pullsecrets\") pod \"f8838ea6-b825-4903-a386-edaf6e7372c6\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.456750 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/f8838ea6-b825-4903-a386-edaf6e7372c6-builder-dockercfg-ng4dn-pull\") pod \"f8838ea6-b825-4903-a386-edaf6e7372c6\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.456846 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8838ea6-b825-4903-a386-edaf6e7372c6-build-proxy-ca-bundles\") pod \"f8838ea6-b825-4903-a386-edaf6e7372c6\" (UID: \"f8838ea6-b825-4903-a386-edaf6e7372c6\") " Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.457901 5110 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f8838ea6-b825-4903-a386-edaf6e7372c6-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.457951 5110 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f8838ea6-b825-4903-a386-edaf6e7372c6-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.458391 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8838ea6-b825-4903-a386-edaf6e7372c6-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod 
"f8838ea6-b825-4903-a386-edaf6e7372c6" (UID: "f8838ea6-b825-4903-a386-edaf6e7372c6"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.458448 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8838ea6-b825-4903-a386-edaf6e7372c6-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "f8838ea6-b825-4903-a386-edaf6e7372c6" (UID: "f8838ea6-b825-4903-a386-edaf6e7372c6"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.460099 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "f8838ea6-b825-4903-a386-edaf6e7372c6" (UID: "f8838ea6-b825-4903-a386-edaf6e7372c6"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.466225 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8838ea6-b825-4903-a386-edaf6e7372c6-kube-api-access-5d9pd" (OuterVolumeSpecName: "kube-api-access-5d9pd") pod "f8838ea6-b825-4903-a386-edaf6e7372c6" (UID: "f8838ea6-b825-4903-a386-edaf6e7372c6"). InnerVolumeSpecName "kube-api-access-5d9pd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.467092 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8838ea6-b825-4903-a386-edaf6e7372c6-builder-dockercfg-ng4dn-pull" (OuterVolumeSpecName: "builder-dockercfg-ng4dn-pull") pod "f8838ea6-b825-4903-a386-edaf6e7372c6" (UID: "f8838ea6-b825-4903-a386-edaf6e7372c6"). InnerVolumeSpecName "builder-dockercfg-ng4dn-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.469324 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8838ea6-b825-4903-a386-edaf6e7372c6-builder-dockercfg-ng4dn-push" (OuterVolumeSpecName: "builder-dockercfg-ng4dn-push") pod "f8838ea6-b825-4903-a386-edaf6e7372c6" (UID: "f8838ea6-b825-4903-a386-edaf6e7372c6"). InnerVolumeSpecName "builder-dockercfg-ng4dn-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.475034 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8838ea6-b825-4903-a386-edaf6e7372c6-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "f8838ea6-b825-4903-a386-edaf6e7372c6" (UID: "f8838ea6-b825-4903-a386-edaf6e7372c6"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.559436 5110 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8838ea6-b825-4903-a386-edaf6e7372c6-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.559475 5110 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f8838ea6-b825-4903-a386-edaf6e7372c6-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.559489 5110 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.559501 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5d9pd\" (UniqueName: 
\"kubernetes.io/projected/f8838ea6-b825-4903-a386-edaf6e7372c6-kube-api-access-5d9pd\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.559513 5110 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/f8838ea6-b825-4903-a386-edaf6e7372c6-builder-dockercfg-ng4dn-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.559525 5110 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8838ea6-b825-4903-a386-edaf6e7372c6-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.559536 5110 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/f8838ea6-b825-4903-a386-edaf6e7372c6-builder-dockercfg-ng4dn-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.581594 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "f8838ea6-b825-4903-a386-edaf6e7372c6" (UID: "f8838ea6-b825-4903-a386-edaf6e7372c6"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.660312 5110 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.778683 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "f8838ea6-b825-4903-a386-edaf6e7372c6" (UID: "f8838ea6-b825-4903-a386-edaf6e7372c6"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:27:04 crc kubenswrapper[5110]: I0126 00:27:04.862788 5110 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:05 crc kubenswrapper[5110]: I0126 00:27:05.009402 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_f8838ea6-b825-4903-a386-edaf6e7372c6/docker-build/0.log" Jan 26 00:27:05 crc kubenswrapper[5110]: I0126 00:27:05.010607 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:27:05 crc kubenswrapper[5110]: I0126 00:27:05.010681 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"f8838ea6-b825-4903-a386-edaf6e7372c6","Type":"ContainerDied","Data":"91499f9ef592f1d738ac8dd42e030de2a92424a97568b572f161a47e1d76f57e"} Jan 26 00:27:05 crc kubenswrapper[5110]: I0126 00:27:05.010768 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91499f9ef592f1d738ac8dd42e030de2a92424a97568b572f161a47e1d76f57e" Jan 26 00:27:06 crc kubenswrapper[5110]: I0126 00:27:06.608674 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "f8838ea6-b825-4903-a386-edaf6e7372c6" (UID: "f8838ea6-b825-4903-a386-edaf6e7372c6"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:27:06 crc kubenswrapper[5110]: I0126 00:27:06.692770 5110 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f8838ea6-b825-4903-a386-edaf6e7372c6-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.351344 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.352637 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f8838ea6-b825-4903-a386-edaf6e7372c6" containerName="docker-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.352660 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8838ea6-b825-4903-a386-edaf6e7372c6" containerName="docker-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.352676 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec" containerName="oc" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.352683 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec" containerName="oc" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.352705 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f8838ea6-b825-4903-a386-edaf6e7372c6" containerName="git-clone" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.352712 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8838ea6-b825-4903-a386-edaf6e7372c6" containerName="git-clone" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.352732 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f8838ea6-b825-4903-a386-edaf6e7372c6" containerName="manage-dockerfile" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.352740 5110 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f8838ea6-b825-4903-a386-edaf6e7372c6" containerName="manage-dockerfile" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.352910 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec" containerName="oc" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.352928 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="f8838ea6-b825-4903-a386-edaf6e7372c6" containerName="docker-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.365718 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.380712 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-sys-config\"" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.381086 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-global-ca\"" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.381086 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-ca\"" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.384447 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-ng4dn\"" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.390821 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.451436 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-system-configs\") pod 
\"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.451535 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.451575 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.451602 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-builder-dockercfg-ng4dn-push\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.451832 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.451873 5110 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.451898 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-builder-dockercfg-ng4dn-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.451970 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.452005 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.452034 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-blob-cache\") pod 
\"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.452067 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4nrd\" (UniqueName: \"kubernetes.io/projected/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-kube-api-access-j4nrd\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.452090 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.553356 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.553414 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.553442 5110 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-builder-dockercfg-ng4dn-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.553483 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.553508 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.553546 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.553586 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j4nrd\" (UniqueName: \"kubernetes.io/projected/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-kube-api-access-j4nrd\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc 
kubenswrapper[5110]: I0126 00:27:15.553616 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.553965 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.554036 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.554056 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.554047 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " 
pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.554287 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-builder-dockercfg-ng4dn-push\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.554740 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.555199 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.555601 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.555883 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-proxy-ca-bundles\") pod 
\"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.556031 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.556080 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.556087 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.556229 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.563082 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: 
\"kubernetes.io/secret/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-builder-dockercfg-ng4dn-push\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.563242 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-builder-dockercfg-ng4dn-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.571293 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4nrd\" (UniqueName: \"kubernetes.io/projected/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-kube-api-access-j4nrd\") pod \"service-telemetry-operator-4-build\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:15 crc kubenswrapper[5110]: I0126 00:27:15.708270 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:16 crc kubenswrapper[5110]: I0126 00:27:16.078002 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 26 00:27:16 crc kubenswrapper[5110]: I0126 00:27:16.102383 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf","Type":"ContainerStarted","Data":"1febe7528c1a841567eee8f4fd93a0b77f33a96b65648c6610870e34462ea59a"} Jan 26 00:27:17 crc kubenswrapper[5110]: I0126 00:27:17.112895 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf","Type":"ContainerStarted","Data":"6994100f53b0226b69b68c7f18ecf7307042ee5099a9f62cfeaae6065c0cc256"} Jan 26 00:27:26 crc kubenswrapper[5110]: I0126 00:27:26.813438 5110 patch_prober.go:28] interesting pod/machine-config-daemon-c6tpr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:27:26 crc kubenswrapper[5110]: I0126 00:27:26.815942 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:27:27 crc kubenswrapper[5110]: I0126 00:27:27.221574 5110 generic.go:358] "Generic (PLEG): container finished" podID="2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" containerID="6994100f53b0226b69b68c7f18ecf7307042ee5099a9f62cfeaae6065c0cc256" exitCode=0 Jan 26 00:27:27 crc kubenswrapper[5110]: I0126 00:27:27.221648 5110 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf","Type":"ContainerDied","Data":"6994100f53b0226b69b68c7f18ecf7307042ee5099a9f62cfeaae6065c0cc256"} Jan 26 00:27:28 crc kubenswrapper[5110]: I0126 00:27:28.232328 5110 generic.go:358] "Generic (PLEG): container finished" podID="2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" containerID="9873a8f78650f76a792b4da459d651d27dcaa23d5270606111cea04e1c5c4fdb" exitCode=0 Jan 26 00:27:28 crc kubenswrapper[5110]: I0126 00:27:28.232438 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf","Type":"ContainerDied","Data":"9873a8f78650f76a792b4da459d651d27dcaa23d5270606111cea04e1c5c4fdb"} Jan 26 00:27:28 crc kubenswrapper[5110]: I0126 00:27:28.277115 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf/manage-dockerfile/0.log" Jan 26 00:27:29 crc kubenswrapper[5110]: I0126 00:27:29.246100 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf","Type":"ContainerStarted","Data":"8dde9018e28b8048721db73aa19b25a004e606fdbbb7ffc6c5e0ee584caa4421"} Jan 26 00:27:29 crc kubenswrapper[5110]: I0126 00:27:29.290466 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-4-build" podStartSLOduration=14.290437264 podStartE2EDuration="14.290437264s" podCreationTimestamp="2026-01-26 00:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:27:29.285400171 +0000 UTC m=+1106.514298780" watchObservedRunningTime="2026-01-26 00:27:29.290437264 +0000 UTC m=+1106.519335913" 
Jan 26 00:27:56 crc kubenswrapper[5110]: I0126 00:27:56.813303 5110 patch_prober.go:28] interesting pod/machine-config-daemon-c6tpr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:27:56 crc kubenswrapper[5110]: I0126 00:27:56.814529 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:27:56 crc kubenswrapper[5110]: I0126 00:27:56.814685 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" Jan 26 00:27:56 crc kubenswrapper[5110]: I0126 00:27:56.815890 5110 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"85c836c21afd57f854e547aa21c8b990ebf524babd5afa313958f93419d014e7"} pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:27:56 crc kubenswrapper[5110]: I0126 00:27:56.816002 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" containerID="cri-o://85c836c21afd57f854e547aa21c8b990ebf524babd5afa313958f93419d014e7" gracePeriod=600 Jan 26 00:27:57 crc kubenswrapper[5110]: I0126 00:27:57.528146 5110 generic.go:358] "Generic (PLEG): container finished" podID="f15bed73-d669-439f-9828-7b952d9bfe65" 
containerID="85c836c21afd57f854e547aa21c8b990ebf524babd5afa313958f93419d014e7" exitCode=0 Jan 26 00:27:57 crc kubenswrapper[5110]: I0126 00:27:57.528730 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" event={"ID":"f15bed73-d669-439f-9828-7b952d9bfe65","Type":"ContainerDied","Data":"85c836c21afd57f854e547aa21c8b990ebf524babd5afa313958f93419d014e7"} Jan 26 00:27:57 crc kubenswrapper[5110]: I0126 00:27:57.528784 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" event={"ID":"f15bed73-d669-439f-9828-7b952d9bfe65","Type":"ContainerStarted","Data":"6378941edf2a5c6b68ade18a39cf48e1e5d7713758a77cf9b8edc9cdb2b220a2"} Jan 26 00:27:57 crc kubenswrapper[5110]: I0126 00:27:57.528818 5110 scope.go:117] "RemoveContainer" containerID="f81b2a95bda7dc33d66a95aaff56fe2d3c57ed6f0906b6eb9d8b3b10b83a4ccf" Jan 26 00:28:00 crc kubenswrapper[5110]: I0126 00:28:00.158910 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489788-hq95m"] Jan 26 00:28:00 crc kubenswrapper[5110]: I0126 00:28:00.168781 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489788-hq95m" Jan 26 00:28:00 crc kubenswrapper[5110]: I0126 00:28:00.172669 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:28:00 crc kubenswrapper[5110]: I0126 00:28:00.176168 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:28:00 crc kubenswrapper[5110]: I0126 00:28:00.176221 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dfffg\"" Jan 26 00:28:00 crc kubenswrapper[5110]: I0126 00:28:00.185431 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489788-hq95m"] Jan 26 00:28:00 crc kubenswrapper[5110]: I0126 00:28:00.269743 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv4w5\" (UniqueName: \"kubernetes.io/projected/e125fde3-1b33-4b6d-92ee-bdeada9c1202-kube-api-access-nv4w5\") pod \"auto-csr-approver-29489788-hq95m\" (UID: \"e125fde3-1b33-4b6d-92ee-bdeada9c1202\") " pod="openshift-infra/auto-csr-approver-29489788-hq95m" Jan 26 00:28:00 crc kubenswrapper[5110]: I0126 00:28:00.371620 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nv4w5\" (UniqueName: \"kubernetes.io/projected/e125fde3-1b33-4b6d-92ee-bdeada9c1202-kube-api-access-nv4w5\") pod \"auto-csr-approver-29489788-hq95m\" (UID: \"e125fde3-1b33-4b6d-92ee-bdeada9c1202\") " pod="openshift-infra/auto-csr-approver-29489788-hq95m" Jan 26 00:28:00 crc kubenswrapper[5110]: I0126 00:28:00.390938 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv4w5\" (UniqueName: \"kubernetes.io/projected/e125fde3-1b33-4b6d-92ee-bdeada9c1202-kube-api-access-nv4w5\") pod \"auto-csr-approver-29489788-hq95m\" (UID: 
\"e125fde3-1b33-4b6d-92ee-bdeada9c1202\") " pod="openshift-infra/auto-csr-approver-29489788-hq95m" Jan 26 00:28:00 crc kubenswrapper[5110]: I0126 00:28:00.487970 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489788-hq95m" Jan 26 00:28:00 crc kubenswrapper[5110]: I0126 00:28:00.801596 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489788-hq95m"] Jan 26 00:28:00 crc kubenswrapper[5110]: W0126 00:28:00.817208 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode125fde3_1b33_4b6d_92ee_bdeada9c1202.slice/crio-fbb31ac30caad8a27e90611e16b043259202338054a588293ea9707e86562a04 WatchSource:0}: Error finding container fbb31ac30caad8a27e90611e16b043259202338054a588293ea9707e86562a04: Status 404 returned error can't find the container with id fbb31ac30caad8a27e90611e16b043259202338054a588293ea9707e86562a04 Jan 26 00:28:01 crc kubenswrapper[5110]: I0126 00:28:01.585892 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489788-hq95m" event={"ID":"e125fde3-1b33-4b6d-92ee-bdeada9c1202","Type":"ContainerStarted","Data":"fbb31ac30caad8a27e90611e16b043259202338054a588293ea9707e86562a04"} Jan 26 00:28:02 crc kubenswrapper[5110]: I0126 00:28:02.598579 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489788-hq95m" event={"ID":"e125fde3-1b33-4b6d-92ee-bdeada9c1202","Type":"ContainerStarted","Data":"4d3cad43e3f6d13e2df60b786c8ff4ecbfe824f40f5c735710b47de9176f153c"} Jan 26 00:28:02 crc kubenswrapper[5110]: I0126 00:28:02.623004 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29489788-hq95m" podStartSLOduration=1.3054976649999999 podStartE2EDuration="2.622984903s" podCreationTimestamp="2026-01-26 00:28:00 +0000 UTC" 
firstStartedPulling="2026-01-26 00:28:00.819785292 +0000 UTC m=+1138.048683911" lastFinishedPulling="2026-01-26 00:28:02.13727254 +0000 UTC m=+1139.366171149" observedRunningTime="2026-01-26 00:28:02.620292547 +0000 UTC m=+1139.849191176" watchObservedRunningTime="2026-01-26 00:28:02.622984903 +0000 UTC m=+1139.851883512" Jan 26 00:28:03 crc kubenswrapper[5110]: I0126 00:28:03.610165 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489788-hq95m" event={"ID":"e125fde3-1b33-4b6d-92ee-bdeada9c1202","Type":"ContainerDied","Data":"4d3cad43e3f6d13e2df60b786c8ff4ecbfe824f40f5c735710b47de9176f153c"} Jan 26 00:28:03 crc kubenswrapper[5110]: I0126 00:28:03.610082 5110 generic.go:358] "Generic (PLEG): container finished" podID="e125fde3-1b33-4b6d-92ee-bdeada9c1202" containerID="4d3cad43e3f6d13e2df60b786c8ff4ecbfe824f40f5c735710b47de9176f153c" exitCode=0 Jan 26 00:28:04 crc kubenswrapper[5110]: I0126 00:28:04.986297 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489788-hq95m" Jan 26 00:28:05 crc kubenswrapper[5110]: I0126 00:28:05.149153 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv4w5\" (UniqueName: \"kubernetes.io/projected/e125fde3-1b33-4b6d-92ee-bdeada9c1202-kube-api-access-nv4w5\") pod \"e125fde3-1b33-4b6d-92ee-bdeada9c1202\" (UID: \"e125fde3-1b33-4b6d-92ee-bdeada9c1202\") " Jan 26 00:28:05 crc kubenswrapper[5110]: I0126 00:28:05.161601 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e125fde3-1b33-4b6d-92ee-bdeada9c1202-kube-api-access-nv4w5" (OuterVolumeSpecName: "kube-api-access-nv4w5") pod "e125fde3-1b33-4b6d-92ee-bdeada9c1202" (UID: "e125fde3-1b33-4b6d-92ee-bdeada9c1202"). InnerVolumeSpecName "kube-api-access-nv4w5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:28:05 crc kubenswrapper[5110]: I0126 00:28:05.250886 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nv4w5\" (UniqueName: \"kubernetes.io/projected/e125fde3-1b33-4b6d-92ee-bdeada9c1202-kube-api-access-nv4w5\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:05 crc kubenswrapper[5110]: I0126 00:28:05.633270 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489788-hq95m" event={"ID":"e125fde3-1b33-4b6d-92ee-bdeada9c1202","Type":"ContainerDied","Data":"fbb31ac30caad8a27e90611e16b043259202338054a588293ea9707e86562a04"} Jan 26 00:28:05 crc kubenswrapper[5110]: I0126 00:28:05.633343 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbb31ac30caad8a27e90611e16b043259202338054a588293ea9707e86562a04" Jan 26 00:28:05 crc kubenswrapper[5110]: I0126 00:28:05.633698 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489788-hq95m" Jan 26 00:28:05 crc kubenswrapper[5110]: I0126 00:28:05.686907 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489782-dkgs5"] Jan 26 00:28:05 crc kubenswrapper[5110]: I0126 00:28:05.695370 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489782-dkgs5"] Jan 26 00:28:07 crc kubenswrapper[5110]: I0126 00:28:07.331571 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="329081df-516b-4138-93f0-34cedd468e97" path="/var/lib/kubelet/pods/329081df-516b-4138-93f0-34cedd468e97/volumes" Jan 26 00:28:16 crc kubenswrapper[5110]: I0126 00:28:16.749157 5110 scope.go:117] "RemoveContainer" containerID="e9be00fbcb791d71d3d2fef5ce96cd93b1913b6df910e94162e4d45807b4b998" Jan 26 00:28:41 crc kubenswrapper[5110]: I0126 00:28:41.985584 5110 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf/docker-build/0.log" Jan 26 00:28:41 crc kubenswrapper[5110]: I0126 00:28:41.987613 5110 generic.go:358] "Generic (PLEG): container finished" podID="2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" containerID="8dde9018e28b8048721db73aa19b25a004e606fdbbb7ffc6c5e0ee584caa4421" exitCode=1 Jan 26 00:28:41 crc kubenswrapper[5110]: I0126 00:28:41.987737 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf","Type":"ContainerDied","Data":"8dde9018e28b8048721db73aa19b25a004e606fdbbb7ffc6c5e0ee584caa4421"} Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.305485 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf/docker-build/0.log" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.307050 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.417010 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-proxy-ca-bundles\") pod \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.417118 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-container-storage-root\") pod \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.417254 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-buildworkdir\") pod \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.417292 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-builder-dockercfg-ng4dn-push\") pod \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.417361 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-system-configs\") pod \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.417404 5110 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-node-pullsecrets\") pod \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.417471 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-builder-dockercfg-ng4dn-pull\") pod \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.417510 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-container-storage-run\") pod \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.417540 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-blob-cache\") pod \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.417654 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4nrd\" (UniqueName: \"kubernetes.io/projected/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-kube-api-access-j4nrd\") pod \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.417768 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-ca-bundles\") pod \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.417857 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-buildcachedir\") pod \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\" (UID: \"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf\") " Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.420873 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" (UID: "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.421221 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" (UID: "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.421259 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" (UID: "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.421291 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" (UID: "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.422272 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" (UID: "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.422356 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" (UID: "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.431009 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-builder-dockercfg-ng4dn-push" (OuterVolumeSpecName: "builder-dockercfg-ng4dn-push") pod "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" (UID: "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf"). InnerVolumeSpecName "builder-dockercfg-ng4dn-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.431079 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-builder-dockercfg-ng4dn-pull" (OuterVolumeSpecName: "builder-dockercfg-ng4dn-pull") pod "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" (UID: "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf"). InnerVolumeSpecName "builder-dockercfg-ng4dn-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.431116 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-kube-api-access-j4nrd" (OuterVolumeSpecName: "kube-api-access-j4nrd") pod "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" (UID: "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf"). InnerVolumeSpecName "kube-api-access-j4nrd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.480507 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" (UID: "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.520181 5110 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.521005 5110 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.521208 5110 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-builder-dockercfg-ng4dn-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.521402 5110 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.521545 5110 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.521689 5110 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-builder-dockercfg-ng4dn-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.521906 5110 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-container-storage-run\") on node 
\"crc\" DevicePath \"\"" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.522056 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j4nrd\" (UniqueName: \"kubernetes.io/projected/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-kube-api-access-j4nrd\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.522234 5110 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.522430 5110 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.659132 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" (UID: "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:28:43 crc kubenswrapper[5110]: I0126 00:28:43.724740 5110 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:44 crc kubenswrapper[5110]: I0126 00:28:44.008135 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf/docker-build/0.log" Jan 26 00:28:44 crc kubenswrapper[5110]: I0126 00:28:44.009767 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:28:44 crc kubenswrapper[5110]: I0126 00:28:44.009867 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf","Type":"ContainerDied","Data":"1febe7528c1a841567eee8f4fd93a0b77f33a96b65648c6610870e34462ea59a"} Jan 26 00:28:44 crc kubenswrapper[5110]: I0126 00:28:44.010065 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1febe7528c1a841567eee8f4fd93a0b77f33a96b65648c6610870e34462ea59a" Jan 26 00:28:46 crc kubenswrapper[5110]: I0126 00:28:46.000998 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" (UID: "2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:28:46 crc kubenswrapper[5110]: I0126 00:28:46.066313 5110 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.806630 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.809886 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" containerName="manage-dockerfile" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.810045 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" containerName="manage-dockerfile" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.810195 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e125fde3-1b33-4b6d-92ee-bdeada9c1202" containerName="oc" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.810312 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="e125fde3-1b33-4b6d-92ee-bdeada9c1202" containerName="oc" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.810480 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" containerName="git-clone" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.810598 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" containerName="git-clone" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.810725 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" containerName="docker-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.810870 5110 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" containerName="docker-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.811177 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf" containerName="docker-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.811294 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="e125fde3-1b33-4b6d-92ee-bdeada9c1202" containerName="oc" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.818920 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.821223 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-sys-config\"" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.821752 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-ca\"" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.822077 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-ng4dn\"" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.822394 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-global-ca\"" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.857308 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.891012 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61968fab-ac43-43ab-97ea-814704095718-build-proxy-ca-bundles\") pod 
\"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.891063 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/61968fab-ac43-43ab-97ea-814704095718-builder-dockercfg-ng4dn-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.891103 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61968fab-ac43-43ab-97ea-814704095718-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.891154 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.891184 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/61968fab-ac43-43ab-97ea-814704095718-builder-dockercfg-ng4dn-push\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.891218 5110 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/61968fab-ac43-43ab-97ea-814704095718-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.891234 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.891252 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/61968fab-ac43-43ab-97ea-814704095718-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.891347 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.891534 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-buildworkdir\") pod \"service-telemetry-operator-5-build\" 
(UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.891577 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rk29\" (UniqueName: \"kubernetes.io/projected/61968fab-ac43-43ab-97ea-814704095718-kube-api-access-9rk29\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.891644 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/61968fab-ac43-43ab-97ea-814704095718-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.993470 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.993524 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9rk29\" (UniqueName: \"kubernetes.io/projected/61968fab-ac43-43ab-97ea-814704095718-kube-api-access-9rk29\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.993563 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/61968fab-ac43-43ab-97ea-814704095718-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.993708 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/61968fab-ac43-43ab-97ea-814704095718-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.993772 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61968fab-ac43-43ab-97ea-814704095718-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.993855 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/61968fab-ac43-43ab-97ea-814704095718-builder-dockercfg-ng4dn-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.994018 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.994050 5110 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61968fab-ac43-43ab-97ea-814704095718-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.994580 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61968fab-ac43-43ab-97ea-814704095718-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.994894 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.994930 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/61968fab-ac43-43ab-97ea-814704095718-builder-dockercfg-ng4dn-push\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.994990 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/61968fab-ac43-43ab-97ea-814704095718-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " 
pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.995017 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.995055 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/61968fab-ac43-43ab-97ea-814704095718-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.995078 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.995168 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.995252 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61968fab-ac43-43ab-97ea-814704095718-build-ca-bundles\") pod 
\"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.995392 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/61968fab-ac43-43ab-97ea-814704095718-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.995421 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.995461 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:53 crc kubenswrapper[5110]: I0126 00:28:53.996088 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/61968fab-ac43-43ab-97ea-814704095718-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:54 crc kubenswrapper[5110]: I0126 00:28:54.003051 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: 
\"kubernetes.io/secret/61968fab-ac43-43ab-97ea-814704095718-builder-dockercfg-ng4dn-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:54 crc kubenswrapper[5110]: I0126 00:28:54.003571 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/61968fab-ac43-43ab-97ea-814704095718-builder-dockercfg-ng4dn-push\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:54 crc kubenswrapper[5110]: I0126 00:28:54.025177 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rk29\" (UniqueName: \"kubernetes.io/projected/61968fab-ac43-43ab-97ea-814704095718-kube-api-access-9rk29\") pod \"service-telemetry-operator-5-build\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:54 crc kubenswrapper[5110]: I0126 00:28:54.139205 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:28:54 crc kubenswrapper[5110]: I0126 00:28:54.515962 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 26 00:28:55 crc kubenswrapper[5110]: I0126 00:28:55.148247 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"61968fab-ac43-43ab-97ea-814704095718","Type":"ContainerStarted","Data":"e26f58964e8845901979050c3150031783ea8a7177ceeb15a78fa7fd71b416f0"} Jan 26 00:28:55 crc kubenswrapper[5110]: I0126 00:28:55.148977 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"61968fab-ac43-43ab-97ea-814704095718","Type":"ContainerStarted","Data":"1ac0c3ba274a1c0b3c137f34e4571608fe8772d1a94dba46e772830d8d657f37"} Jan 26 00:29:03 crc kubenswrapper[5110]: I0126 00:29:03.671138 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf/docker-build/0.log" Jan 26 00:29:03 crc kubenswrapper[5110]: I0126 00:29:03.674515 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_f8838ea6-b825-4903-a386-edaf6e7372c6/docker-build/0.log" Jan 26 00:29:03 crc kubenswrapper[5110]: I0126 00:29:03.681972 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_0ec8732b-c9f0-4b4e-8774-673da5c59114/docker-build/0.log" Jan 26 00:29:03 crc kubenswrapper[5110]: I0126 00:29:03.683171 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf/docker-build/0.log" Jan 26 00:29:03 crc kubenswrapper[5110]: I0126 00:29:03.684362 5110 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_f8838ea6-b825-4903-a386-edaf6e7372c6/docker-build/0.log" Jan 26 00:29:03 crc kubenswrapper[5110]: I0126 00:29:03.686393 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_0ec8732b-c9f0-4b4e-8774-673da5c59114/docker-build/0.log" Jan 26 00:29:03 crc kubenswrapper[5110]: I0126 00:29:03.729175 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jh4hk_f2948d2b-fac7-4f3f-8b5f-f6f9c914daec/kube-multus/0.log" Jan 26 00:29:03 crc kubenswrapper[5110]: I0126 00:29:03.733964 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jh4hk_f2948d2b-fac7-4f3f-8b5f-f6f9c914daec/kube-multus/0.log" Jan 26 00:29:03 crc kubenswrapper[5110]: I0126 00:29:03.741519 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 26 00:29:03 crc kubenswrapper[5110]: I0126 00:29:03.741667 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 26 00:29:04 crc kubenswrapper[5110]: I0126 00:29:04.231500 5110 generic.go:358] "Generic (PLEG): container finished" podID="61968fab-ac43-43ab-97ea-814704095718" containerID="e26f58964e8845901979050c3150031783ea8a7177ceeb15a78fa7fd71b416f0" exitCode=0 Jan 26 00:29:04 crc kubenswrapper[5110]: I0126 00:29:04.231974 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"61968fab-ac43-43ab-97ea-814704095718","Type":"ContainerDied","Data":"e26f58964e8845901979050c3150031783ea8a7177ceeb15a78fa7fd71b416f0"} Jan 26 00:29:05 crc kubenswrapper[5110]: I0126 00:29:05.241744 5110 generic.go:358] "Generic (PLEG): container 
finished" podID="61968fab-ac43-43ab-97ea-814704095718" containerID="17c047f425a1682211a6781c14f11618cf9da51540718b4701a31250613010ad" exitCode=0 Jan 26 00:29:05 crc kubenswrapper[5110]: I0126 00:29:05.242571 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"61968fab-ac43-43ab-97ea-814704095718","Type":"ContainerDied","Data":"17c047f425a1682211a6781c14f11618cf9da51540718b4701a31250613010ad"} Jan 26 00:29:05 crc kubenswrapper[5110]: I0126 00:29:05.315324 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_61968fab-ac43-43ab-97ea-814704095718/manage-dockerfile/0.log" Jan 26 00:29:06 crc kubenswrapper[5110]: I0126 00:29:06.254395 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"61968fab-ac43-43ab-97ea-814704095718","Type":"ContainerStarted","Data":"57652ae4f620f350069391598d76688be4b9c879aa7b36461d690fa2833b037e"} Jan 26 00:29:06 crc kubenswrapper[5110]: I0126 00:29:06.298499 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-5-build" podStartSLOduration=13.298475263 podStartE2EDuration="13.298475263s" podCreationTimestamp="2026-01-26 00:28:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:29:06.295754505 +0000 UTC m=+1203.524653134" watchObservedRunningTime="2026-01-26 00:29:06.298475263 +0000 UTC m=+1203.527373912" Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.144157 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489790-67jlp"] Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.162456 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489790-7mjlr"] Jan 26 
00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.162692 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-67jlp" Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.165420 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.165454 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.167435 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489790-67jlp"] Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.167584 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489790-7mjlr" Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.169880 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dfffg\"" Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.169981 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.170754 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.172747 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489790-7mjlr"] Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.245476 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-758ft\" (UniqueName: 
\"kubernetes.io/projected/f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c-kube-api-access-758ft\") pod \"collect-profiles-29489790-67jlp\" (UID: \"f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-67jlp" Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.245574 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c-config-volume\") pod \"collect-profiles-29489790-67jlp\" (UID: \"f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-67jlp" Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.245733 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c-secret-volume\") pod \"collect-profiles-29489790-67jlp\" (UID: \"f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-67jlp" Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.347643 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c-secret-volume\") pod \"collect-profiles-29489790-67jlp\" (UID: \"f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-67jlp" Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.347717 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgsfz\" (UniqueName: \"kubernetes.io/projected/c184d7fd-c527-4955-920d-d058efb87466-kube-api-access-dgsfz\") pod \"auto-csr-approver-29489790-7mjlr\" (UID: \"c184d7fd-c527-4955-920d-d058efb87466\") " pod="openshift-infra/auto-csr-approver-29489790-7mjlr" Jan 26 00:30:00 
crc kubenswrapper[5110]: I0126 00:30:00.349286 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-758ft\" (UniqueName: \"kubernetes.io/projected/f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c-kube-api-access-758ft\") pod \"collect-profiles-29489790-67jlp\" (UID: \"f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-67jlp" Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.349404 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c-config-volume\") pod \"collect-profiles-29489790-67jlp\" (UID: \"f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-67jlp" Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.350689 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c-config-volume\") pod \"collect-profiles-29489790-67jlp\" (UID: \"f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-67jlp" Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.372041 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c-secret-volume\") pod \"collect-profiles-29489790-67jlp\" (UID: \"f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-67jlp" Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.372622 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-758ft\" (UniqueName: \"kubernetes.io/projected/f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c-kube-api-access-758ft\") pod \"collect-profiles-29489790-67jlp\" (UID: 
\"f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-67jlp" Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.450726 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dgsfz\" (UniqueName: \"kubernetes.io/projected/c184d7fd-c527-4955-920d-d058efb87466-kube-api-access-dgsfz\") pod \"auto-csr-approver-29489790-7mjlr\" (UID: \"c184d7fd-c527-4955-920d-d058efb87466\") " pod="openshift-infra/auto-csr-approver-29489790-7mjlr" Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.470185 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgsfz\" (UniqueName: \"kubernetes.io/projected/c184d7fd-c527-4955-920d-d058efb87466-kube-api-access-dgsfz\") pod \"auto-csr-approver-29489790-7mjlr\" (UID: \"c184d7fd-c527-4955-920d-d058efb87466\") " pod="openshift-infra/auto-csr-approver-29489790-7mjlr" Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.529617 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-67jlp" Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.542656 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489790-7mjlr" Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.757139 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489790-67jlp"] Jan 26 00:30:00 crc kubenswrapper[5110]: I0126 00:30:00.808616 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489790-7mjlr"] Jan 26 00:30:00 crc kubenswrapper[5110]: W0126 00:30:00.820975 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc184d7fd_c527_4955_920d_d058efb87466.slice/crio-27385ab256f067f43a981bceeb9f7b9ef0ca611c0efc377e45da3cbc39735b85 WatchSource:0}: Error finding container 27385ab256f067f43a981bceeb9f7b9ef0ca611c0efc377e45da3cbc39735b85: Status 404 returned error can't find the container with id 27385ab256f067f43a981bceeb9f7b9ef0ca611c0efc377e45da3cbc39735b85 Jan 26 00:30:01 crc kubenswrapper[5110]: I0126 00:30:01.713075 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489790-7mjlr" event={"ID":"c184d7fd-c527-4955-920d-d058efb87466","Type":"ContainerStarted","Data":"27385ab256f067f43a981bceeb9f7b9ef0ca611c0efc377e45da3cbc39735b85"} Jan 26 00:30:01 crc kubenswrapper[5110]: I0126 00:30:01.714552 5110 generic.go:358] "Generic (PLEG): container finished" podID="f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c" containerID="2ae7c252e3e936e31475fa3e58ea570a9f29be9283401e91dc81eadc3031c294" exitCode=0 Jan 26 00:30:01 crc kubenswrapper[5110]: I0126 00:30:01.714688 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-67jlp" event={"ID":"f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c","Type":"ContainerDied","Data":"2ae7c252e3e936e31475fa3e58ea570a9f29be9283401e91dc81eadc3031c294"} Jan 26 00:30:01 crc kubenswrapper[5110]: I0126 00:30:01.714708 5110 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-67jlp" event={"ID":"f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c","Type":"ContainerStarted","Data":"541c6ac94029a63be5b536d01248df5e9467ca557dea6845eab8b5a902520d14"} Jan 26 00:30:02 crc kubenswrapper[5110]: I0126 00:30:02.723443 5110 generic.go:358] "Generic (PLEG): container finished" podID="c184d7fd-c527-4955-920d-d058efb87466" containerID="a5c47539112c424d698b49041ce316d04047ffeaa1aeef92aa0bb5cabbea8e1b" exitCode=0 Jan 26 00:30:02 crc kubenswrapper[5110]: I0126 00:30:02.723601 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489790-7mjlr" event={"ID":"c184d7fd-c527-4955-920d-d058efb87466","Type":"ContainerDied","Data":"a5c47539112c424d698b49041ce316d04047ffeaa1aeef92aa0bb5cabbea8e1b"} Jan 26 00:30:02 crc kubenswrapper[5110]: I0126 00:30:02.992050 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-67jlp" Jan 26 00:30:03 crc kubenswrapper[5110]: I0126 00:30:03.091440 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-758ft\" (UniqueName: \"kubernetes.io/projected/f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c-kube-api-access-758ft\") pod \"f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c\" (UID: \"f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c\") " Jan 26 00:30:03 crc kubenswrapper[5110]: I0126 00:30:03.091498 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c-config-volume\") pod \"f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c\" (UID: \"f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c\") " Jan 26 00:30:03 crc kubenswrapper[5110]: I0126 00:30:03.091591 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c-secret-volume\") pod \"f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c\" (UID: \"f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c\") " Jan 26 00:30:03 crc kubenswrapper[5110]: I0126 00:30:03.092708 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c-config-volume" (OuterVolumeSpecName: "config-volume") pod "f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c" (UID: "f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:30:03 crc kubenswrapper[5110]: I0126 00:30:03.098348 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c-kube-api-access-758ft" (OuterVolumeSpecName: "kube-api-access-758ft") pod "f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c" (UID: "f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c"). InnerVolumeSpecName "kube-api-access-758ft". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:30:03 crc kubenswrapper[5110]: I0126 00:30:03.098977 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c" (UID: "f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:30:03 crc kubenswrapper[5110]: I0126 00:30:03.193160 5110 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:03 crc kubenswrapper[5110]: I0126 00:30:03.193458 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-758ft\" (UniqueName: \"kubernetes.io/projected/f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c-kube-api-access-758ft\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:03 crc kubenswrapper[5110]: I0126 00:30:03.193524 5110 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:03 crc kubenswrapper[5110]: I0126 00:30:03.731782 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-67jlp" Jan 26 00:30:03 crc kubenswrapper[5110]: I0126 00:30:03.731787 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-67jlp" event={"ID":"f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c","Type":"ContainerDied","Data":"541c6ac94029a63be5b536d01248df5e9467ca557dea6845eab8b5a902520d14"} Jan 26 00:30:03 crc kubenswrapper[5110]: I0126 00:30:03.733079 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="541c6ac94029a63be5b536d01248df5e9467ca557dea6845eab8b5a902520d14" Jan 26 00:30:04 crc kubenswrapper[5110]: I0126 00:30:04.041636 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489790-7mjlr" Jan 26 00:30:04 crc kubenswrapper[5110]: I0126 00:30:04.142671 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgsfz\" (UniqueName: \"kubernetes.io/projected/c184d7fd-c527-4955-920d-d058efb87466-kube-api-access-dgsfz\") pod \"c184d7fd-c527-4955-920d-d058efb87466\" (UID: \"c184d7fd-c527-4955-920d-d058efb87466\") " Jan 26 00:30:04 crc kubenswrapper[5110]: I0126 00:30:04.149145 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c184d7fd-c527-4955-920d-d058efb87466-kube-api-access-dgsfz" (OuterVolumeSpecName: "kube-api-access-dgsfz") pod "c184d7fd-c527-4955-920d-d058efb87466" (UID: "c184d7fd-c527-4955-920d-d058efb87466"). InnerVolumeSpecName "kube-api-access-dgsfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:30:04 crc kubenswrapper[5110]: I0126 00:30:04.244899 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dgsfz\" (UniqueName: \"kubernetes.io/projected/c184d7fd-c527-4955-920d-d058efb87466-kube-api-access-dgsfz\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:04 crc kubenswrapper[5110]: I0126 00:30:04.739316 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489790-7mjlr" event={"ID":"c184d7fd-c527-4955-920d-d058efb87466","Type":"ContainerDied","Data":"27385ab256f067f43a981bceeb9f7b9ef0ca611c0efc377e45da3cbc39735b85"} Jan 26 00:30:04 crc kubenswrapper[5110]: I0126 00:30:04.739686 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27385ab256f067f43a981bceeb9f7b9ef0ca611c0efc377e45da3cbc39735b85" Jan 26 00:30:04 crc kubenswrapper[5110]: I0126 00:30:04.739339 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489790-7mjlr" Jan 26 00:30:05 crc kubenswrapper[5110]: I0126 00:30:05.106384 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489784-hz7xq"] Jan 26 00:30:05 crc kubenswrapper[5110]: I0126 00:30:05.119781 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489784-hz7xq"] Jan 26 00:30:05 crc kubenswrapper[5110]: I0126 00:30:05.325383 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7352d7ab-22b1-4a56-9e90-4fc47ffead42" path="/var/lib/kubelet/pods/7352d7ab-22b1-4a56-9e90-4fc47ffead42/volumes" Jan 26 00:30:16 crc kubenswrapper[5110]: I0126 00:30:16.858607 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_61968fab-ac43-43ab-97ea-814704095718/docker-build/0.log" Jan 26 00:30:16 crc kubenswrapper[5110]: I0126 00:30:16.861550 5110 generic.go:358] "Generic (PLEG): container finished" podID="61968fab-ac43-43ab-97ea-814704095718" containerID="57652ae4f620f350069391598d76688be4b9c879aa7b36461d690fa2833b037e" exitCode=1 Jan 26 00:30:16 crc kubenswrapper[5110]: I0126 00:30:16.861675 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"61968fab-ac43-43ab-97ea-814704095718","Type":"ContainerDied","Data":"57652ae4f620f350069391598d76688be4b9c879aa7b36461d690fa2833b037e"} Jan 26 00:30:16 crc kubenswrapper[5110]: I0126 00:30:16.916152 5110 scope.go:117] "RemoveContainer" containerID="7a4401a4ad33e42d6d1493799097d49f967bdd77ec8a82024579594828e49f49" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.235429 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_61968fab-ac43-43ab-97ea-814704095718/docker-build/0.log" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.237334 5110 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.378918 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/61968fab-ac43-43ab-97ea-814704095718-builder-dockercfg-ng4dn-pull\") pod \"61968fab-ac43-43ab-97ea-814704095718\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.379039 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/61968fab-ac43-43ab-97ea-814704095718-builder-dockercfg-ng4dn-push\") pod \"61968fab-ac43-43ab-97ea-814704095718\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.379066 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/61968fab-ac43-43ab-97ea-814704095718-node-pullsecrets\") pod \"61968fab-ac43-43ab-97ea-814704095718\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.379094 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-buildworkdir\") pod \"61968fab-ac43-43ab-97ea-814704095718\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.379141 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61968fab-ac43-43ab-97ea-814704095718-build-proxy-ca-bundles\") pod \"61968fab-ac43-43ab-97ea-814704095718\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " Jan 26 00:30:18 crc kubenswrapper[5110]: 
I0126 00:30:18.379159 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-container-storage-run\") pod \"61968fab-ac43-43ab-97ea-814704095718\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.379197 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-build-blob-cache\") pod \"61968fab-ac43-43ab-97ea-814704095718\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.379246 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61968fab-ac43-43ab-97ea-814704095718-build-ca-bundles\") pod \"61968fab-ac43-43ab-97ea-814704095718\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.379296 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/61968fab-ac43-43ab-97ea-814704095718-buildcachedir\") pod \"61968fab-ac43-43ab-97ea-814704095718\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.379325 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-container-storage-root\") pod \"61968fab-ac43-43ab-97ea-814704095718\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.379365 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/61968fab-ac43-43ab-97ea-814704095718-build-system-configs\") pod \"61968fab-ac43-43ab-97ea-814704095718\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.379394 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rk29\" (UniqueName: \"kubernetes.io/projected/61968fab-ac43-43ab-97ea-814704095718-kube-api-access-9rk29\") pod \"61968fab-ac43-43ab-97ea-814704095718\" (UID: \"61968fab-ac43-43ab-97ea-814704095718\") " Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.380081 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61968fab-ac43-43ab-97ea-814704095718-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "61968fab-ac43-43ab-97ea-814704095718" (UID: "61968fab-ac43-43ab-97ea-814704095718"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.380625 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61968fab-ac43-43ab-97ea-814704095718-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "61968fab-ac43-43ab-97ea-814704095718" (UID: "61968fab-ac43-43ab-97ea-814704095718"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.381739 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61968fab-ac43-43ab-97ea-814704095718-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "61968fab-ac43-43ab-97ea-814704095718" (UID: "61968fab-ac43-43ab-97ea-814704095718"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.383048 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61968fab-ac43-43ab-97ea-814704095718-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "61968fab-ac43-43ab-97ea-814704095718" (UID: "61968fab-ac43-43ab-97ea-814704095718"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.383175 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61968fab-ac43-43ab-97ea-814704095718-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "61968fab-ac43-43ab-97ea-814704095718" (UID: "61968fab-ac43-43ab-97ea-814704095718"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.383271 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "61968fab-ac43-43ab-97ea-814704095718" (UID: "61968fab-ac43-43ab-97ea-814704095718"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.387214 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61968fab-ac43-43ab-97ea-814704095718-builder-dockercfg-ng4dn-pull" (OuterVolumeSpecName: "builder-dockercfg-ng4dn-pull") pod "61968fab-ac43-43ab-97ea-814704095718" (UID: "61968fab-ac43-43ab-97ea-814704095718"). InnerVolumeSpecName "builder-dockercfg-ng4dn-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.387618 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61968fab-ac43-43ab-97ea-814704095718-builder-dockercfg-ng4dn-push" (OuterVolumeSpecName: "builder-dockercfg-ng4dn-push") pod "61968fab-ac43-43ab-97ea-814704095718" (UID: "61968fab-ac43-43ab-97ea-814704095718"). InnerVolumeSpecName "builder-dockercfg-ng4dn-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.387648 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61968fab-ac43-43ab-97ea-814704095718-kube-api-access-9rk29" (OuterVolumeSpecName: "kube-api-access-9rk29") pod "61968fab-ac43-43ab-97ea-814704095718" (UID: "61968fab-ac43-43ab-97ea-814704095718"). InnerVolumeSpecName "kube-api-access-9rk29". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.424750 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "61968fab-ac43-43ab-97ea-814704095718" (UID: "61968fab-ac43-43ab-97ea-814704095718"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.480909 5110 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61968fab-ac43-43ab-97ea-814704095718-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.480970 5110 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/61968fab-ac43-43ab-97ea-814704095718-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.480981 5110 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/61968fab-ac43-43ab-97ea-814704095718-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.480993 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9rk29\" (UniqueName: \"kubernetes.io/projected/61968fab-ac43-43ab-97ea-814704095718-kube-api-access-9rk29\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.481002 5110 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-ng4dn-pull\" (UniqueName: \"kubernetes.io/secret/61968fab-ac43-43ab-97ea-814704095718-builder-dockercfg-ng4dn-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.481011 5110 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-ng4dn-push\" (UniqueName: \"kubernetes.io/secret/61968fab-ac43-43ab-97ea-814704095718-builder-dockercfg-ng4dn-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.481019 5110 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/61968fab-ac43-43ab-97ea-814704095718-node-pullsecrets\") on node \"crc\" 
DevicePath \"\"" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.481027 5110 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.481036 5110 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61968fab-ac43-43ab-97ea-814704095718-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.481044 5110 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.638982 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "61968fab-ac43-43ab-97ea-814704095718" (UID: "61968fab-ac43-43ab-97ea-814704095718"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.683598 5110 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.884143 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_61968fab-ac43-43ab-97ea-814704095718/docker-build/0.log" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.886076 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"61968fab-ac43-43ab-97ea-814704095718","Type":"ContainerDied","Data":"1ac0c3ba274a1c0b3c137f34e4571608fe8772d1a94dba46e772830d8d657f37"} Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.886132 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ac0c3ba274a1c0b3c137f34e4571608fe8772d1a94dba46e772830d8d657f37" Jan 26 00:30:18 crc kubenswrapper[5110]: I0126 00:30:18.886193 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:30:20 crc kubenswrapper[5110]: I0126 00:30:20.536421 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "61968fab-ac43-43ab-97ea-814704095718" (UID: "61968fab-ac43-43ab-97ea-814704095718"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:30:20 crc kubenswrapper[5110]: I0126 00:30:20.615135 5110 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/61968fab-ac43-43ab-97ea-814704095718-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:26 crc kubenswrapper[5110]: I0126 00:30:26.813072 5110 patch_prober.go:28] interesting pod/machine-config-daemon-c6tpr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:30:26 crc kubenswrapper[5110]: I0126 00:30:26.813658 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.661387 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mw7kl/must-gather-hssx4"] Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.663074 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="61968fab-ac43-43ab-97ea-814704095718" containerName="manage-dockerfile" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.663099 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="61968fab-ac43-43ab-97ea-814704095718" containerName="manage-dockerfile" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.663114 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="61968fab-ac43-43ab-97ea-814704095718" containerName="git-clone" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.663125 5110 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="61968fab-ac43-43ab-97ea-814704095718" containerName="git-clone" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.663139 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c" containerName="collect-profiles" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.663149 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c" containerName="collect-profiles" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.663190 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c184d7fd-c527-4955-920d-d058efb87466" containerName="oc" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.663201 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="c184d7fd-c527-4955-920d-d058efb87466" containerName="oc" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.663221 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="61968fab-ac43-43ab-97ea-814704095718" containerName="docker-build" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.663231 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="61968fab-ac43-43ab-97ea-814704095718" containerName="docker-build" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.663388 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="c184d7fd-c527-4955-920d-d058efb87466" containerName="oc" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.663419 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="f8a53e4b-a3e6-41d7-9b22-3ee719d1e65c" containerName="collect-profiles" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.663433 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="61968fab-ac43-43ab-97ea-814704095718" containerName="docker-build" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.678957 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-must-gather-mw7kl/must-gather-hssx4"] Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.679135 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mw7kl/must-gather-hssx4" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.681532 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-mw7kl\"/\"kube-root-ca.crt\"" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.681918 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-mw7kl\"/\"default-dockercfg-ftpdl\"" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.682632 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-mw7kl\"/\"openshift-service-ca.crt\"" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.791278 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9q68\" (UniqueName: \"kubernetes.io/projected/40af9024-688a-433b-9bd3-1052721377d5-kube-api-access-d9q68\") pod \"must-gather-hssx4\" (UID: \"40af9024-688a-433b-9bd3-1052721377d5\") " pod="openshift-must-gather-mw7kl/must-gather-hssx4" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.791361 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/40af9024-688a-433b-9bd3-1052721377d5-must-gather-output\") pod \"must-gather-hssx4\" (UID: \"40af9024-688a-433b-9bd3-1052721377d5\") " pod="openshift-must-gather-mw7kl/must-gather-hssx4" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.813106 5110 patch_prober.go:28] interesting pod/machine-config-daemon-c6tpr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.813194 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.892722 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d9q68\" (UniqueName: \"kubernetes.io/projected/40af9024-688a-433b-9bd3-1052721377d5-kube-api-access-d9q68\") pod \"must-gather-hssx4\" (UID: \"40af9024-688a-433b-9bd3-1052721377d5\") " pod="openshift-must-gather-mw7kl/must-gather-hssx4" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.892907 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/40af9024-688a-433b-9bd3-1052721377d5-must-gather-output\") pod \"must-gather-hssx4\" (UID: \"40af9024-688a-433b-9bd3-1052721377d5\") " pod="openshift-must-gather-mw7kl/must-gather-hssx4" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.893424 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/40af9024-688a-433b-9bd3-1052721377d5-must-gather-output\") pod \"must-gather-hssx4\" (UID: \"40af9024-688a-433b-9bd3-1052721377d5\") " pod="openshift-must-gather-mw7kl/must-gather-hssx4" Jan 26 00:30:56 crc kubenswrapper[5110]: I0126 00:30:56.933221 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9q68\" (UniqueName: \"kubernetes.io/projected/40af9024-688a-433b-9bd3-1052721377d5-kube-api-access-d9q68\") pod \"must-gather-hssx4\" (UID: \"40af9024-688a-433b-9bd3-1052721377d5\") " 
pod="openshift-must-gather-mw7kl/must-gather-hssx4" Jan 26 00:30:57 crc kubenswrapper[5110]: I0126 00:30:57.001231 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mw7kl/must-gather-hssx4" Jan 26 00:30:57 crc kubenswrapper[5110]: I0126 00:30:57.207463 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mw7kl/must-gather-hssx4"] Jan 26 00:30:57 crc kubenswrapper[5110]: I0126 00:30:57.223923 5110 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:30:58 crc kubenswrapper[5110]: I0126 00:30:58.217335 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mw7kl/must-gather-hssx4" event={"ID":"40af9024-688a-433b-9bd3-1052721377d5","Type":"ContainerStarted","Data":"4e9701b981ee4c946cd374ed4b52b9606a6c88b4b6a4a1e0fbab3040653ae45c"} Jan 26 00:31:03 crc kubenswrapper[5110]: I0126 00:31:03.258012 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mw7kl/must-gather-hssx4" event={"ID":"40af9024-688a-433b-9bd3-1052721377d5","Type":"ContainerStarted","Data":"78f410892fd489f80c13049f46e722583e2daa64c316cb6e3d39d8eba6f85151"} Jan 26 00:31:03 crc kubenswrapper[5110]: I0126 00:31:03.258972 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mw7kl/must-gather-hssx4" event={"ID":"40af9024-688a-433b-9bd3-1052721377d5","Type":"ContainerStarted","Data":"f236589ecf34a68e38149876a23afa21c43b2d1fe7615c2f54c6a74743530b6e"} Jan 26 00:31:03 crc kubenswrapper[5110]: I0126 00:31:03.278169 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mw7kl/must-gather-hssx4" podStartSLOduration=2.240075612 podStartE2EDuration="7.278146796s" podCreationTimestamp="2026-01-26 00:30:56 +0000 UTC" firstStartedPulling="2026-01-26 00:30:57.224153405 +0000 UTC m=+1314.453052014" lastFinishedPulling="2026-01-26 00:31:02.262224589 +0000 UTC 
m=+1319.491123198" observedRunningTime="2026-01-26 00:31:03.273199147 +0000 UTC m=+1320.502097756" watchObservedRunningTime="2026-01-26 00:31:03.278146796 +0000 UTC m=+1320.507045405" Jan 26 00:31:26 crc kubenswrapper[5110]: I0126 00:31:26.816399 5110 patch_prober.go:28] interesting pod/machine-config-daemon-c6tpr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:31:26 crc kubenswrapper[5110]: I0126 00:31:26.817429 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:31:26 crc kubenswrapper[5110]: I0126 00:31:26.817492 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" Jan 26 00:31:26 crc kubenswrapper[5110]: I0126 00:31:26.818279 5110 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6378941edf2a5c6b68ade18a39cf48e1e5d7713758a77cf9b8edc9cdb2b220a2"} pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:31:26 crc kubenswrapper[5110]: I0126 00:31:26.818438 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" containerID="cri-o://6378941edf2a5c6b68ade18a39cf48e1e5d7713758a77cf9b8edc9cdb2b220a2" gracePeriod=600 Jan 26 00:31:27 crc 
kubenswrapper[5110]: I0126 00:31:27.444069 5110 generic.go:358] "Generic (PLEG): container finished" podID="f15bed73-d669-439f-9828-7b952d9bfe65" containerID="6378941edf2a5c6b68ade18a39cf48e1e5d7713758a77cf9b8edc9cdb2b220a2" exitCode=0 Jan 26 00:31:27 crc kubenswrapper[5110]: I0126 00:31:27.444164 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" event={"ID":"f15bed73-d669-439f-9828-7b952d9bfe65","Type":"ContainerDied","Data":"6378941edf2a5c6b68ade18a39cf48e1e5d7713758a77cf9b8edc9cdb2b220a2"} Jan 26 00:31:27 crc kubenswrapper[5110]: I0126 00:31:27.444754 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" event={"ID":"f15bed73-d669-439f-9828-7b952d9bfe65","Type":"ContainerStarted","Data":"538b080a7f2fe02cd7dffd8adbbbd4bde5573141b47811cea8a3b724e939206d"} Jan 26 00:31:27 crc kubenswrapper[5110]: I0126 00:31:27.444790 5110 scope.go:117] "RemoveContainer" containerID="85c836c21afd57f854e547aa21c8b990ebf524babd5afa313958f93419d014e7" Jan 26 00:31:50 crc kubenswrapper[5110]: I0126 00:31:50.277209 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-crnjv_77aa983a-f3c1-4799-a84b-c3c7a381a1bc/control-plane-machine-set-operator/0.log" Jan 26 00:31:50 crc kubenswrapper[5110]: I0126 00:31:50.461516 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-jptld_10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730/kube-rbac-proxy/0.log" Jan 26 00:31:50 crc kubenswrapper[5110]: I0126 00:31:50.468772 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-jptld_10d2b7d6-0a18-4f5c-b3b8-2c6f34e88730/machine-api-operator/0.log" Jan 26 00:32:00 crc kubenswrapper[5110]: I0126 00:32:00.141404 5110 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29489792-vt2hd"] Jan 26 00:32:00 crc kubenswrapper[5110]: I0126 00:32:00.147889 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489792-vt2hd" Jan 26 00:32:00 crc kubenswrapper[5110]: I0126 00:32:00.152960 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489792-vt2hd"] Jan 26 00:32:00 crc kubenswrapper[5110]: I0126 00:32:00.157541 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dfffg\"" Jan 26 00:32:00 crc kubenswrapper[5110]: I0126 00:32:00.157574 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:32:00 crc kubenswrapper[5110]: I0126 00:32:00.157968 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:32:00 crc kubenswrapper[5110]: I0126 00:32:00.333718 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdlsh\" (UniqueName: \"kubernetes.io/projected/fd96632d-dde9-4392-871b-46dbae0d5788-kube-api-access-mdlsh\") pod \"auto-csr-approver-29489792-vt2hd\" (UID: \"fd96632d-dde9-4392-871b-46dbae0d5788\") " pod="openshift-infra/auto-csr-approver-29489792-vt2hd" Jan 26 00:32:00 crc kubenswrapper[5110]: I0126 00:32:00.435109 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mdlsh\" (UniqueName: \"kubernetes.io/projected/fd96632d-dde9-4392-871b-46dbae0d5788-kube-api-access-mdlsh\") pod \"auto-csr-approver-29489792-vt2hd\" (UID: \"fd96632d-dde9-4392-871b-46dbae0d5788\") " pod="openshift-infra/auto-csr-approver-29489792-vt2hd" Jan 26 00:32:00 crc kubenswrapper[5110]: I0126 00:32:00.478391 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mdlsh\" (UniqueName: \"kubernetes.io/projected/fd96632d-dde9-4392-871b-46dbae0d5788-kube-api-access-mdlsh\") pod \"auto-csr-approver-29489792-vt2hd\" (UID: \"fd96632d-dde9-4392-871b-46dbae0d5788\") " pod="openshift-infra/auto-csr-approver-29489792-vt2hd" Jan 26 00:32:00 crc kubenswrapper[5110]: I0126 00:32:00.483365 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489792-vt2hd" Jan 26 00:32:00 crc kubenswrapper[5110]: I0126 00:32:00.760992 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489792-vt2hd"] Jan 26 00:32:01 crc kubenswrapper[5110]: I0126 00:32:01.736374 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489792-vt2hd" event={"ID":"fd96632d-dde9-4392-871b-46dbae0d5788","Type":"ContainerStarted","Data":"ede1949d3ae03acb7d2a1edf9246bb008ed333188b40224b85eaaf22df77bed9"} Jan 26 00:32:02 crc kubenswrapper[5110]: I0126 00:32:02.748670 5110 generic.go:358] "Generic (PLEG): container finished" podID="fd96632d-dde9-4392-871b-46dbae0d5788" containerID="74290f617beb267f4e5095f8769ba135f23725f5e811350cb89931c77ef6d567" exitCode=0 Jan 26 00:32:02 crc kubenswrapper[5110]: I0126 00:32:02.748760 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489792-vt2hd" event={"ID":"fd96632d-dde9-4392-871b-46dbae0d5788","Type":"ContainerDied","Data":"74290f617beb267f4e5095f8769ba135f23725f5e811350cb89931c77ef6d567"} Jan 26 00:32:04 crc kubenswrapper[5110]: I0126 00:32:04.021971 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489792-vt2hd" Jan 26 00:32:04 crc kubenswrapper[5110]: I0126 00:32:04.089382 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdlsh\" (UniqueName: \"kubernetes.io/projected/fd96632d-dde9-4392-871b-46dbae0d5788-kube-api-access-mdlsh\") pod \"fd96632d-dde9-4392-871b-46dbae0d5788\" (UID: \"fd96632d-dde9-4392-871b-46dbae0d5788\") " Jan 26 00:32:04 crc kubenswrapper[5110]: I0126 00:32:04.115789 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd96632d-dde9-4392-871b-46dbae0d5788-kube-api-access-mdlsh" (OuterVolumeSpecName: "kube-api-access-mdlsh") pod "fd96632d-dde9-4392-871b-46dbae0d5788" (UID: "fd96632d-dde9-4392-871b-46dbae0d5788"). InnerVolumeSpecName "kube-api-access-mdlsh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:32:04 crc kubenswrapper[5110]: I0126 00:32:04.191175 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mdlsh\" (UniqueName: \"kubernetes.io/projected/fd96632d-dde9-4392-871b-46dbae0d5788-kube-api-access-mdlsh\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:04 crc kubenswrapper[5110]: I0126 00:32:04.410679 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-ttxf9_d06bb016-995f-4bbb-a490-0b336707d443/cert-manager-controller/0.log" Jan 26 00:32:04 crc kubenswrapper[5110]: I0126 00:32:04.540861 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-58c5k_34d875fe-deae-45f9-8d5b-6e28c4b6138a/cert-manager-cainjector/0.log" Jan 26 00:32:04 crc kubenswrapper[5110]: I0126 00:32:04.605537 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-zptcb_63a94427-5376-447b-af64-fbf2d9576a40/cert-manager-webhook/0.log" Jan 26 00:32:04 crc kubenswrapper[5110]: I0126 00:32:04.769355 5110 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489792-vt2hd" event={"ID":"fd96632d-dde9-4392-871b-46dbae0d5788","Type":"ContainerDied","Data":"ede1949d3ae03acb7d2a1edf9246bb008ed333188b40224b85eaaf22df77bed9"} Jan 26 00:32:04 crc kubenswrapper[5110]: I0126 00:32:04.769415 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ede1949d3ae03acb7d2a1edf9246bb008ed333188b40224b85eaaf22df77bed9" Jan 26 00:32:04 crc kubenswrapper[5110]: I0126 00:32:04.769503 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489792-vt2hd" Jan 26 00:32:05 crc kubenswrapper[5110]: I0126 00:32:05.088336 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489786-25xw5"] Jan 26 00:32:05 crc kubenswrapper[5110]: I0126 00:32:05.095238 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489786-25xw5"] Jan 26 00:32:05 crc kubenswrapper[5110]: I0126 00:32:05.325713 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec" path="/var/lib/kubelet/pods/9242d7cd-65e0-49e7-9d89-dad7e4b1f8ec/volumes" Jan 26 00:32:17 crc kubenswrapper[5110]: I0126 00:32:17.080656 5110 scope.go:117] "RemoveContainer" containerID="56f1b3738e12c88ec58398333d5aef5efd8960491746aa1b41af72c5e09afefe" Jan 26 00:32:19 crc kubenswrapper[5110]: I0126 00:32:19.632603 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-hjwfj_47b88cf6-8ce3-4593-b450-4a0b6ac95908/prometheus-operator/0.log" Jan 26 00:32:19 crc kubenswrapper[5110]: I0126 00:32:19.813045 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-898766f4b-5sc47_85376b96-d97e-4b5e-9bb0-9a931610c0ec/prometheus-operator-admission-webhook/0.log" Jan 26 
00:32:19 crc kubenswrapper[5110]: I0126 00:32:19.861858 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-898766f4b-7m2dg_7cc0b3a9-bae1-46d7-974b-da9bd9c524e4/prometheus-operator-admission-webhook/0.log" Jan 26 00:32:20 crc kubenswrapper[5110]: I0126 00:32:20.001267 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-jr7zr_18496abe-bb01-4ce4-a79c-35bc522ec58d/operator/0.log" Jan 26 00:32:20 crc kubenswrapper[5110]: I0126 00:32:20.079991 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-5jr67_e5dbd874-cd23-4d81-92b1-7bc9c9109e2c/perses-operator/0.log" Jan 26 00:32:36 crc kubenswrapper[5110]: I0126 00:32:36.400832 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z_857dd7a1-3d35-47a1-b2fa-5bcee0265262/util/0.log" Jan 26 00:32:36 crc kubenswrapper[5110]: I0126 00:32:36.590522 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z_857dd7a1-3d35-47a1-b2fa-5bcee0265262/util/0.log" Jan 26 00:32:36 crc kubenswrapper[5110]: I0126 00:32:36.636515 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z_857dd7a1-3d35-47a1-b2fa-5bcee0265262/pull/0.log" Jan 26 00:32:36 crc kubenswrapper[5110]: I0126 00:32:36.672278 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z_857dd7a1-3d35-47a1-b2fa-5bcee0265262/pull/0.log" Jan 26 00:32:36 crc kubenswrapper[5110]: I0126 00:32:36.792097 5110 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z_857dd7a1-3d35-47a1-b2fa-5bcee0265262/util/0.log" Jan 26 00:32:36 crc kubenswrapper[5110]: I0126 00:32:36.828049 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z_857dd7a1-3d35-47a1-b2fa-5bcee0265262/extract/0.log" Jan 26 00:32:36 crc kubenswrapper[5110]: I0126 00:32:36.872755 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931arzl9z_857dd7a1-3d35-47a1-b2fa-5bcee0265262/pull/0.log" Jan 26 00:32:37 crc kubenswrapper[5110]: I0126 00:32:37.069497 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh_2711e85a-af93-43d8-8e4a-d6b92be4f574/util/0.log" Jan 26 00:32:37 crc kubenswrapper[5110]: I0126 00:32:37.293763 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh_2711e85a-af93-43d8-8e4a-d6b92be4f574/pull/0.log" Jan 26 00:32:37 crc kubenswrapper[5110]: I0126 00:32:37.307922 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh_2711e85a-af93-43d8-8e4a-d6b92be4f574/pull/0.log" Jan 26 00:32:37 crc kubenswrapper[5110]: I0126 00:32:37.308143 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh_2711e85a-af93-43d8-8e4a-d6b92be4f574/util/0.log" Jan 26 00:32:37 crc kubenswrapper[5110]: I0126 00:32:37.485497 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh_2711e85a-af93-43d8-8e4a-d6b92be4f574/pull/0.log" Jan 26 
00:32:37 crc kubenswrapper[5110]: I0126 00:32:37.505202 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh_2711e85a-af93-43d8-8e4a-d6b92be4f574/util/0.log" Jan 26 00:32:37 crc kubenswrapper[5110]: I0126 00:32:37.528646 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fh4pfh_2711e85a-af93-43d8-8e4a-d6b92be4f574/extract/0.log" Jan 26 00:32:37 crc kubenswrapper[5110]: I0126 00:32:37.682830 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk_d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc/util/0.log" Jan 26 00:32:37 crc kubenswrapper[5110]: I0126 00:32:37.889145 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk_d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc/pull/0.log" Jan 26 00:32:37 crc kubenswrapper[5110]: I0126 00:32:37.928014 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk_d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc/pull/0.log" Jan 26 00:32:37 crc kubenswrapper[5110]: I0126 00:32:37.944151 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk_d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc/util/0.log" Jan 26 00:32:38 crc kubenswrapper[5110]: I0126 00:32:38.134953 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk_d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc/util/0.log" Jan 26 00:32:38 crc kubenswrapper[5110]: I0126 00:32:38.145091 5110 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk_d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc/pull/0.log" Jan 26 00:32:38 crc kubenswrapper[5110]: I0126 00:32:38.159783 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5egjbtk_d7c3c4b9-4dab-446c-8d0a-843bf2e3a6bc/extract/0.log" Jan 26 00:32:38 crc kubenswrapper[5110]: I0126 00:32:38.351168 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww_f8b123a0-9bf2-4b5f-b26d-14407c464561/util/0.log" Jan 26 00:32:38 crc kubenswrapper[5110]: I0126 00:32:38.510714 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww_f8b123a0-9bf2-4b5f-b26d-14407c464561/pull/0.log" Jan 26 00:32:38 crc kubenswrapper[5110]: I0126 00:32:38.516442 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww_f8b123a0-9bf2-4b5f-b26d-14407c464561/pull/0.log" Jan 26 00:32:38 crc kubenswrapper[5110]: I0126 00:32:38.574054 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww_f8b123a0-9bf2-4b5f-b26d-14407c464561/util/0.log" Jan 26 00:32:38 crc kubenswrapper[5110]: I0126 00:32:38.746250 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww_f8b123a0-9bf2-4b5f-b26d-14407c464561/pull/0.log" Jan 26 00:32:38 crc kubenswrapper[5110]: I0126 00:32:38.757319 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww_f8b123a0-9bf2-4b5f-b26d-14407c464561/util/0.log" Jan 26 
00:32:38 crc kubenswrapper[5110]: I0126 00:32:38.772472 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qs2ww_f8b123a0-9bf2-4b5f-b26d-14407c464561/extract/0.log" Jan 26 00:32:38 crc kubenswrapper[5110]: I0126 00:32:38.990973 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2zd9j_2268b8e4-3c7b-4c92-9430-4e4514b2f3c1/extract-utilities/0.log" Jan 26 00:32:39 crc kubenswrapper[5110]: I0126 00:32:39.133135 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2zd9j_2268b8e4-3c7b-4c92-9430-4e4514b2f3c1/extract-content/0.log" Jan 26 00:32:39 crc kubenswrapper[5110]: I0126 00:32:39.189522 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2zd9j_2268b8e4-3c7b-4c92-9430-4e4514b2f3c1/extract-content/0.log" Jan 26 00:32:39 crc kubenswrapper[5110]: I0126 00:32:39.196912 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2zd9j_2268b8e4-3c7b-4c92-9430-4e4514b2f3c1/extract-utilities/0.log" Jan 26 00:32:39 crc kubenswrapper[5110]: I0126 00:32:39.376981 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2zd9j_2268b8e4-3c7b-4c92-9430-4e4514b2f3c1/extract-content/0.log" Jan 26 00:32:39 crc kubenswrapper[5110]: I0126 00:32:39.417562 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2zd9j_2268b8e4-3c7b-4c92-9430-4e4514b2f3c1/extract-utilities/0.log" Jan 26 00:32:39 crc kubenswrapper[5110]: I0126 00:32:39.521967 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2zd9j_2268b8e4-3c7b-4c92-9430-4e4514b2f3c1/registry-server/0.log" Jan 26 00:32:39 crc kubenswrapper[5110]: I0126 00:32:39.534903 5110 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2984n_50db59ae-4cff-430a-875e-9d7310641e25/extract-utilities/0.log" Jan 26 00:32:39 crc kubenswrapper[5110]: I0126 00:32:39.686369 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2984n_50db59ae-4cff-430a-875e-9d7310641e25/extract-utilities/0.log" Jan 26 00:32:39 crc kubenswrapper[5110]: I0126 00:32:39.693048 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2984n_50db59ae-4cff-430a-875e-9d7310641e25/extract-content/0.log" Jan 26 00:32:39 crc kubenswrapper[5110]: I0126 00:32:39.695746 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2984n_50db59ae-4cff-430a-875e-9d7310641e25/extract-content/0.log" Jan 26 00:32:39 crc kubenswrapper[5110]: I0126 00:32:39.883987 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2984n_50db59ae-4cff-430a-875e-9d7310641e25/extract-utilities/0.log" Jan 26 00:32:39 crc kubenswrapper[5110]: I0126 00:32:39.917492 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2984n_50db59ae-4cff-430a-875e-9d7310641e25/extract-content/0.log" Jan 26 00:32:39 crc kubenswrapper[5110]: I0126 00:32:39.950278 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-25lrq_a1ada370-69b1-4b43-9a4b-95006bc2f1c7/marketplace-operator/0.log" Jan 26 00:32:40 crc kubenswrapper[5110]: I0126 00:32:40.146289 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zwn7c_9b9d67c0-e0dd-4db3-85d5-958a460183a3/extract-utilities/0.log" Jan 26 00:32:40 crc kubenswrapper[5110]: I0126 00:32:40.155064 5110 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-2984n_50db59ae-4cff-430a-875e-9d7310641e25/registry-server/0.log" Jan 26 00:32:40 crc kubenswrapper[5110]: I0126 00:32:40.346906 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zwn7c_9b9d67c0-e0dd-4db3-85d5-958a460183a3/extract-utilities/0.log" Jan 26 00:32:40 crc kubenswrapper[5110]: I0126 00:32:40.367430 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zwn7c_9b9d67c0-e0dd-4db3-85d5-958a460183a3/extract-content/0.log" Jan 26 00:32:40 crc kubenswrapper[5110]: I0126 00:32:40.394624 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zwn7c_9b9d67c0-e0dd-4db3-85d5-958a460183a3/extract-content/0.log" Jan 26 00:32:40 crc kubenswrapper[5110]: I0126 00:32:40.593106 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zwn7c_9b9d67c0-e0dd-4db3-85d5-958a460183a3/extract-utilities/0.log" Jan 26 00:32:40 crc kubenswrapper[5110]: I0126 00:32:40.593482 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zwn7c_9b9d67c0-e0dd-4db3-85d5-958a460183a3/extract-content/0.log" Jan 26 00:32:40 crc kubenswrapper[5110]: I0126 00:32:40.714724 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zwn7c_9b9d67c0-e0dd-4db3-85d5-958a460183a3/registry-server/0.log" Jan 26 00:32:43 crc kubenswrapper[5110]: I0126 00:32:43.847464 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fdn97"] Jan 26 00:32:43 crc kubenswrapper[5110]: I0126 00:32:43.848700 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fd96632d-dde9-4392-871b-46dbae0d5788" containerName="oc" Jan 26 00:32:43 crc kubenswrapper[5110]: I0126 00:32:43.848718 5110 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="fd96632d-dde9-4392-871b-46dbae0d5788" containerName="oc" Jan 26 00:32:43 crc kubenswrapper[5110]: I0126 00:32:43.848892 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="fd96632d-dde9-4392-871b-46dbae0d5788" containerName="oc" Jan 26 00:32:43 crc kubenswrapper[5110]: I0126 00:32:43.996635 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fdn97"] Jan 26 00:32:43 crc kubenswrapper[5110]: I0126 00:32:43.996873 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fdn97" Jan 26 00:32:44 crc kubenswrapper[5110]: I0126 00:32:44.122879 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76168dac-acdb-4e23-b0b3-22e061b8f730-utilities\") pod \"certified-operators-fdn97\" (UID: \"76168dac-acdb-4e23-b0b3-22e061b8f730\") " pod="openshift-marketplace/certified-operators-fdn97" Jan 26 00:32:44 crc kubenswrapper[5110]: I0126 00:32:44.122998 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76168dac-acdb-4e23-b0b3-22e061b8f730-catalog-content\") pod \"certified-operators-fdn97\" (UID: \"76168dac-acdb-4e23-b0b3-22e061b8f730\") " pod="openshift-marketplace/certified-operators-fdn97" Jan 26 00:32:44 crc kubenswrapper[5110]: I0126 00:32:44.123057 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wpnh\" (UniqueName: \"kubernetes.io/projected/76168dac-acdb-4e23-b0b3-22e061b8f730-kube-api-access-6wpnh\") pod \"certified-operators-fdn97\" (UID: \"76168dac-acdb-4e23-b0b3-22e061b8f730\") " pod="openshift-marketplace/certified-operators-fdn97" Jan 26 00:32:44 crc kubenswrapper[5110]: I0126 00:32:44.223982 5110 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76168dac-acdb-4e23-b0b3-22e061b8f730-catalog-content\") pod \"certified-operators-fdn97\" (UID: \"76168dac-acdb-4e23-b0b3-22e061b8f730\") " pod="openshift-marketplace/certified-operators-fdn97" Jan 26 00:32:44 crc kubenswrapper[5110]: I0126 00:32:44.224789 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6wpnh\" (UniqueName: \"kubernetes.io/projected/76168dac-acdb-4e23-b0b3-22e061b8f730-kube-api-access-6wpnh\") pod \"certified-operators-fdn97\" (UID: \"76168dac-acdb-4e23-b0b3-22e061b8f730\") " pod="openshift-marketplace/certified-operators-fdn97" Jan 26 00:32:44 crc kubenswrapper[5110]: I0126 00:32:44.224820 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76168dac-acdb-4e23-b0b3-22e061b8f730-utilities\") pod \"certified-operators-fdn97\" (UID: \"76168dac-acdb-4e23-b0b3-22e061b8f730\") " pod="openshift-marketplace/certified-operators-fdn97" Jan 26 00:32:44 crc kubenswrapper[5110]: I0126 00:32:44.224673 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76168dac-acdb-4e23-b0b3-22e061b8f730-catalog-content\") pod \"certified-operators-fdn97\" (UID: \"76168dac-acdb-4e23-b0b3-22e061b8f730\") " pod="openshift-marketplace/certified-operators-fdn97" Jan 26 00:32:44 crc kubenswrapper[5110]: I0126 00:32:44.225240 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76168dac-acdb-4e23-b0b3-22e061b8f730-utilities\") pod \"certified-operators-fdn97\" (UID: \"76168dac-acdb-4e23-b0b3-22e061b8f730\") " pod="openshift-marketplace/certified-operators-fdn97" Jan 26 00:32:44 crc kubenswrapper[5110]: I0126 00:32:44.251564 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6wpnh\" (UniqueName: \"kubernetes.io/projected/76168dac-acdb-4e23-b0b3-22e061b8f730-kube-api-access-6wpnh\") pod \"certified-operators-fdn97\" (UID: \"76168dac-acdb-4e23-b0b3-22e061b8f730\") " pod="openshift-marketplace/certified-operators-fdn97" Jan 26 00:32:44 crc kubenswrapper[5110]: I0126 00:32:44.316284 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fdn97" Jan 26 00:32:44 crc kubenswrapper[5110]: I0126 00:32:44.528398 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fdn97"] Jan 26 00:32:45 crc kubenswrapper[5110]: I0126 00:32:45.183418 5110 generic.go:358] "Generic (PLEG): container finished" podID="76168dac-acdb-4e23-b0b3-22e061b8f730" containerID="fa12a4fa93a5c6699fe54ab452c5a1f6448518bbd3e052d2e7d7091e8e14ea0a" exitCode=0 Jan 26 00:32:45 crc kubenswrapper[5110]: I0126 00:32:45.183692 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdn97" event={"ID":"76168dac-acdb-4e23-b0b3-22e061b8f730","Type":"ContainerDied","Data":"fa12a4fa93a5c6699fe54ab452c5a1f6448518bbd3e052d2e7d7091e8e14ea0a"} Jan 26 00:32:45 crc kubenswrapper[5110]: I0126 00:32:45.184088 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdn97" event={"ID":"76168dac-acdb-4e23-b0b3-22e061b8f730","Type":"ContainerStarted","Data":"1191404fee32370ce1b31cf589937bd72940467231996f8f82a5bff28225cc2a"} Jan 26 00:32:46 crc kubenswrapper[5110]: I0126 00:32:46.193629 5110 generic.go:358] "Generic (PLEG): container finished" podID="76168dac-acdb-4e23-b0b3-22e061b8f730" containerID="1b8a00d8b894aa052f83c077c52c67ccda41bde9f676d62daaa334bf199e4edd" exitCode=0 Jan 26 00:32:46 crc kubenswrapper[5110]: I0126 00:32:46.193937 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdn97" 
event={"ID":"76168dac-acdb-4e23-b0b3-22e061b8f730","Type":"ContainerDied","Data":"1b8a00d8b894aa052f83c077c52c67ccda41bde9f676d62daaa334bf199e4edd"} Jan 26 00:32:47 crc kubenswrapper[5110]: I0126 00:32:47.202909 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdn97" event={"ID":"76168dac-acdb-4e23-b0b3-22e061b8f730","Type":"ContainerStarted","Data":"98a810172dca89c55f96dbb438a8d754184b05d3622f0246c2b333d4e700587b"} Jan 26 00:32:47 crc kubenswrapper[5110]: I0126 00:32:47.221968 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fdn97" podStartSLOduration=3.562309924 podStartE2EDuration="4.221924259s" podCreationTimestamp="2026-01-26 00:32:43 +0000 UTC" firstStartedPulling="2026-01-26 00:32:45.185543486 +0000 UTC m=+1422.414442095" lastFinishedPulling="2026-01-26 00:32:45.845157821 +0000 UTC m=+1423.074056430" observedRunningTime="2026-01-26 00:32:47.220752316 +0000 UTC m=+1424.449650945" watchObservedRunningTime="2026-01-26 00:32:47.221924259 +0000 UTC m=+1424.450822868" Jan 26 00:32:54 crc kubenswrapper[5110]: I0126 00:32:54.276591 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-hjwfj_47b88cf6-8ce3-4593-b450-4a0b6ac95908/prometheus-operator/0.log" Jan 26 00:32:54 crc kubenswrapper[5110]: I0126 00:32:54.306411 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-898766f4b-7m2dg_7cc0b3a9-bae1-46d7-974b-da9bd9c524e4/prometheus-operator-admission-webhook/0.log" Jan 26 00:32:54 crc kubenswrapper[5110]: I0126 00:32:54.316480 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fdn97" Jan 26 00:32:54 crc kubenswrapper[5110]: I0126 00:32:54.316637 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-marketplace/certified-operators-fdn97" Jan 26 00:32:54 crc kubenswrapper[5110]: I0126 00:32:54.336046 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-898766f4b-5sc47_85376b96-d97e-4b5e-9bb0-9a931610c0ec/prometheus-operator-admission-webhook/0.log" Jan 26 00:32:54 crc kubenswrapper[5110]: I0126 00:32:54.363265 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fdn97" Jan 26 00:32:54 crc kubenswrapper[5110]: I0126 00:32:54.447960 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-jr7zr_18496abe-bb01-4ce4-a79c-35bc522ec58d/operator/0.log" Jan 26 00:32:54 crc kubenswrapper[5110]: I0126 00:32:54.488778 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-5jr67_e5dbd874-cd23-4d81-92b1-7bc9c9109e2c/perses-operator/0.log" Jan 26 00:32:55 crc kubenswrapper[5110]: I0126 00:32:55.304888 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fdn97" Jan 26 00:32:55 crc kubenswrapper[5110]: I0126 00:32:55.376228 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fdn97"] Jan 26 00:32:57 crc kubenswrapper[5110]: I0126 00:32:57.283661 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fdn97" podUID="76168dac-acdb-4e23-b0b3-22e061b8f730" containerName="registry-server" containerID="cri-o://98a810172dca89c55f96dbb438a8d754184b05d3622f0246c2b333d4e700587b" gracePeriod=2 Jan 26 00:32:57 crc kubenswrapper[5110]: I0126 00:32:57.701308 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fdn97" Jan 26 00:32:57 crc kubenswrapper[5110]: I0126 00:32:57.737000 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wpnh\" (UniqueName: \"kubernetes.io/projected/76168dac-acdb-4e23-b0b3-22e061b8f730-kube-api-access-6wpnh\") pod \"76168dac-acdb-4e23-b0b3-22e061b8f730\" (UID: \"76168dac-acdb-4e23-b0b3-22e061b8f730\") " Jan 26 00:32:57 crc kubenswrapper[5110]: I0126 00:32:57.737201 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76168dac-acdb-4e23-b0b3-22e061b8f730-utilities\") pod \"76168dac-acdb-4e23-b0b3-22e061b8f730\" (UID: \"76168dac-acdb-4e23-b0b3-22e061b8f730\") " Jan 26 00:32:57 crc kubenswrapper[5110]: I0126 00:32:57.737261 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76168dac-acdb-4e23-b0b3-22e061b8f730-catalog-content\") pod \"76168dac-acdb-4e23-b0b3-22e061b8f730\" (UID: \"76168dac-acdb-4e23-b0b3-22e061b8f730\") " Jan 26 00:32:57 crc kubenswrapper[5110]: I0126 00:32:57.743737 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76168dac-acdb-4e23-b0b3-22e061b8f730-utilities" (OuterVolumeSpecName: "utilities") pod "76168dac-acdb-4e23-b0b3-22e061b8f730" (UID: "76168dac-acdb-4e23-b0b3-22e061b8f730"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:32:57 crc kubenswrapper[5110]: I0126 00:32:57.750490 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76168dac-acdb-4e23-b0b3-22e061b8f730-kube-api-access-6wpnh" (OuterVolumeSpecName: "kube-api-access-6wpnh") pod "76168dac-acdb-4e23-b0b3-22e061b8f730" (UID: "76168dac-acdb-4e23-b0b3-22e061b8f730"). InnerVolumeSpecName "kube-api-access-6wpnh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:32:57 crc kubenswrapper[5110]: I0126 00:32:57.774319 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76168dac-acdb-4e23-b0b3-22e061b8f730-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76168dac-acdb-4e23-b0b3-22e061b8f730" (UID: "76168dac-acdb-4e23-b0b3-22e061b8f730"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:32:57 crc kubenswrapper[5110]: I0126 00:32:57.838769 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76168dac-acdb-4e23-b0b3-22e061b8f730-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:57 crc kubenswrapper[5110]: I0126 00:32:57.838920 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6wpnh\" (UniqueName: \"kubernetes.io/projected/76168dac-acdb-4e23-b0b3-22e061b8f730-kube-api-access-6wpnh\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:57 crc kubenswrapper[5110]: I0126 00:32:57.838939 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76168dac-acdb-4e23-b0b3-22e061b8f730-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:58 crc kubenswrapper[5110]: I0126 00:32:58.299384 5110 generic.go:358] "Generic (PLEG): container finished" podID="76168dac-acdb-4e23-b0b3-22e061b8f730" containerID="98a810172dca89c55f96dbb438a8d754184b05d3622f0246c2b333d4e700587b" exitCode=0 Jan 26 00:32:58 crc kubenswrapper[5110]: I0126 00:32:58.299522 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdn97" event={"ID":"76168dac-acdb-4e23-b0b3-22e061b8f730","Type":"ContainerDied","Data":"98a810172dca89c55f96dbb438a8d754184b05d3622f0246c2b333d4e700587b"} Jan 26 00:32:58 crc kubenswrapper[5110]: I0126 00:32:58.301496 5110 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-fdn97" event={"ID":"76168dac-acdb-4e23-b0b3-22e061b8f730","Type":"ContainerDied","Data":"1191404fee32370ce1b31cf589937bd72940467231996f8f82a5bff28225cc2a"} Jan 26 00:32:58 crc kubenswrapper[5110]: I0126 00:32:58.301543 5110 scope.go:117] "RemoveContainer" containerID="98a810172dca89c55f96dbb438a8d754184b05d3622f0246c2b333d4e700587b" Jan 26 00:32:58 crc kubenswrapper[5110]: I0126 00:32:58.299581 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fdn97" Jan 26 00:32:58 crc kubenswrapper[5110]: I0126 00:32:58.357961 5110 scope.go:117] "RemoveContainer" containerID="1b8a00d8b894aa052f83c077c52c67ccda41bde9f676d62daaa334bf199e4edd" Jan 26 00:32:58 crc kubenswrapper[5110]: I0126 00:32:58.383293 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fdn97"] Jan 26 00:32:58 crc kubenswrapper[5110]: I0126 00:32:58.392730 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fdn97"] Jan 26 00:32:58 crc kubenswrapper[5110]: I0126 00:32:58.412909 5110 scope.go:117] "RemoveContainer" containerID="fa12a4fa93a5c6699fe54ab452c5a1f6448518bbd3e052d2e7d7091e8e14ea0a" Jan 26 00:32:58 crc kubenswrapper[5110]: I0126 00:32:58.434759 5110 scope.go:117] "RemoveContainer" containerID="98a810172dca89c55f96dbb438a8d754184b05d3622f0246c2b333d4e700587b" Jan 26 00:32:58 crc kubenswrapper[5110]: E0126 00:32:58.435422 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98a810172dca89c55f96dbb438a8d754184b05d3622f0246c2b333d4e700587b\": container with ID starting with 98a810172dca89c55f96dbb438a8d754184b05d3622f0246c2b333d4e700587b not found: ID does not exist" containerID="98a810172dca89c55f96dbb438a8d754184b05d3622f0246c2b333d4e700587b" Jan 26 00:32:58 crc kubenswrapper[5110]: I0126 
00:32:58.435496 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98a810172dca89c55f96dbb438a8d754184b05d3622f0246c2b333d4e700587b"} err="failed to get container status \"98a810172dca89c55f96dbb438a8d754184b05d3622f0246c2b333d4e700587b\": rpc error: code = NotFound desc = could not find container \"98a810172dca89c55f96dbb438a8d754184b05d3622f0246c2b333d4e700587b\": container with ID starting with 98a810172dca89c55f96dbb438a8d754184b05d3622f0246c2b333d4e700587b not found: ID does not exist" Jan 26 00:32:58 crc kubenswrapper[5110]: I0126 00:32:58.435541 5110 scope.go:117] "RemoveContainer" containerID="1b8a00d8b894aa052f83c077c52c67ccda41bde9f676d62daaa334bf199e4edd" Jan 26 00:32:58 crc kubenswrapper[5110]: E0126 00:32:58.435923 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b8a00d8b894aa052f83c077c52c67ccda41bde9f676d62daaa334bf199e4edd\": container with ID starting with 1b8a00d8b894aa052f83c077c52c67ccda41bde9f676d62daaa334bf199e4edd not found: ID does not exist" containerID="1b8a00d8b894aa052f83c077c52c67ccda41bde9f676d62daaa334bf199e4edd" Jan 26 00:32:58 crc kubenswrapper[5110]: I0126 00:32:58.435963 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8a00d8b894aa052f83c077c52c67ccda41bde9f676d62daaa334bf199e4edd"} err="failed to get container status \"1b8a00d8b894aa052f83c077c52c67ccda41bde9f676d62daaa334bf199e4edd\": rpc error: code = NotFound desc = could not find container \"1b8a00d8b894aa052f83c077c52c67ccda41bde9f676d62daaa334bf199e4edd\": container with ID starting with 1b8a00d8b894aa052f83c077c52c67ccda41bde9f676d62daaa334bf199e4edd not found: ID does not exist" Jan 26 00:32:58 crc kubenswrapper[5110]: I0126 00:32:58.435982 5110 scope.go:117] "RemoveContainer" containerID="fa12a4fa93a5c6699fe54ab452c5a1f6448518bbd3e052d2e7d7091e8e14ea0a" Jan 26 00:32:58 crc 
kubenswrapper[5110]: E0126 00:32:58.436230 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa12a4fa93a5c6699fe54ab452c5a1f6448518bbd3e052d2e7d7091e8e14ea0a\": container with ID starting with fa12a4fa93a5c6699fe54ab452c5a1f6448518bbd3e052d2e7d7091e8e14ea0a not found: ID does not exist" containerID="fa12a4fa93a5c6699fe54ab452c5a1f6448518bbd3e052d2e7d7091e8e14ea0a" Jan 26 00:32:58 crc kubenswrapper[5110]: I0126 00:32:58.436256 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa12a4fa93a5c6699fe54ab452c5a1f6448518bbd3e052d2e7d7091e8e14ea0a"} err="failed to get container status \"fa12a4fa93a5c6699fe54ab452c5a1f6448518bbd3e052d2e7d7091e8e14ea0a\": rpc error: code = NotFound desc = could not find container \"fa12a4fa93a5c6699fe54ab452c5a1f6448518bbd3e052d2e7d7091e8e14ea0a\": container with ID starting with fa12a4fa93a5c6699fe54ab452c5a1f6448518bbd3e052d2e7d7091e8e14ea0a not found: ID does not exist" Jan 26 00:32:59 crc kubenswrapper[5110]: I0126 00:32:59.328384 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76168dac-acdb-4e23-b0b3-22e061b8f730" path="/var/lib/kubelet/pods/76168dac-acdb-4e23-b0b3-22e061b8f730/volumes" Jan 26 00:33:41 crc kubenswrapper[5110]: I0126 00:33:41.686070 5110 generic.go:358] "Generic (PLEG): container finished" podID="40af9024-688a-433b-9bd3-1052721377d5" containerID="f236589ecf34a68e38149876a23afa21c43b2d1fe7615c2f54c6a74743530b6e" exitCode=0 Jan 26 00:33:41 crc kubenswrapper[5110]: I0126 00:33:41.686174 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mw7kl/must-gather-hssx4" event={"ID":"40af9024-688a-433b-9bd3-1052721377d5","Type":"ContainerDied","Data":"f236589ecf34a68e38149876a23afa21c43b2d1fe7615c2f54c6a74743530b6e"} Jan 26 00:33:41 crc kubenswrapper[5110]: I0126 00:33:41.687046 5110 scope.go:117] "RemoveContainer" 
containerID="f236589ecf34a68e38149876a23afa21c43b2d1fe7615c2f54c6a74743530b6e"
Jan 26 00:33:42 crc kubenswrapper[5110]: I0126 00:33:42.518698 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mw7kl_must-gather-hssx4_40af9024-688a-433b-9bd3-1052721377d5/gather/0.log"
Jan 26 00:33:48 crc kubenswrapper[5110]: I0126 00:33:48.718697 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mw7kl/must-gather-hssx4"]
Jan 26 00:33:48 crc kubenswrapper[5110]: I0126 00:33:48.720155 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-mw7kl/must-gather-hssx4" podUID="40af9024-688a-433b-9bd3-1052721377d5" containerName="copy" containerID="cri-o://78f410892fd489f80c13049f46e722583e2daa64c316cb6e3d39d8eba6f85151" gracePeriod=2
Jan 26 00:33:48 crc kubenswrapper[5110]: I0126 00:33:48.723676 5110 status_manager.go:895] "Failed to get status for pod" podUID="40af9024-688a-433b-9bd3-1052721377d5" pod="openshift-must-gather-mw7kl/must-gather-hssx4" err="pods \"must-gather-hssx4\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-mw7kl\": no relationship found between node 'crc' and this object"
Jan 26 00:33:48 crc kubenswrapper[5110]: I0126 00:33:48.731747 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mw7kl/must-gather-hssx4"]
Jan 26 00:33:49 crc kubenswrapper[5110]: I0126 00:33:49.167514 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mw7kl_must-gather-hssx4_40af9024-688a-433b-9bd3-1052721377d5/copy/0.log"
Jan 26 00:33:49 crc kubenswrapper[5110]: I0126 00:33:49.169013 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mw7kl/must-gather-hssx4"
Jan 26 00:33:49 crc kubenswrapper[5110]: I0126 00:33:49.171132 5110 status_manager.go:895] "Failed to get status for pod" podUID="40af9024-688a-433b-9bd3-1052721377d5" pod="openshift-must-gather-mw7kl/must-gather-hssx4" err="pods \"must-gather-hssx4\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-mw7kl\": no relationship found between node 'crc' and this object"
Jan 26 00:33:49 crc kubenswrapper[5110]: I0126 00:33:49.230000 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/40af9024-688a-433b-9bd3-1052721377d5-must-gather-output\") pod \"40af9024-688a-433b-9bd3-1052721377d5\" (UID: \"40af9024-688a-433b-9bd3-1052721377d5\") "
Jan 26 00:33:49 crc kubenswrapper[5110]: I0126 00:33:49.230174 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9q68\" (UniqueName: \"kubernetes.io/projected/40af9024-688a-433b-9bd3-1052721377d5-kube-api-access-d9q68\") pod \"40af9024-688a-433b-9bd3-1052721377d5\" (UID: \"40af9024-688a-433b-9bd3-1052721377d5\") "
Jan 26 00:33:49 crc kubenswrapper[5110]: I0126 00:33:49.241516 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40af9024-688a-433b-9bd3-1052721377d5-kube-api-access-d9q68" (OuterVolumeSpecName: "kube-api-access-d9q68") pod "40af9024-688a-433b-9bd3-1052721377d5" (UID: "40af9024-688a-433b-9bd3-1052721377d5"). InnerVolumeSpecName "kube-api-access-d9q68". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:33:49 crc kubenswrapper[5110]: I0126 00:33:49.283153 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40af9024-688a-433b-9bd3-1052721377d5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "40af9024-688a-433b-9bd3-1052721377d5" (UID: "40af9024-688a-433b-9bd3-1052721377d5"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:33:49 crc kubenswrapper[5110]: I0126 00:33:49.329455 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40af9024-688a-433b-9bd3-1052721377d5" path="/var/lib/kubelet/pods/40af9024-688a-433b-9bd3-1052721377d5/volumes"
Jan 26 00:33:49 crc kubenswrapper[5110]: I0126 00:33:49.331895 5110 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/40af9024-688a-433b-9bd3-1052721377d5-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 26 00:33:49 crc kubenswrapper[5110]: I0126 00:33:49.331933 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d9q68\" (UniqueName: \"kubernetes.io/projected/40af9024-688a-433b-9bd3-1052721377d5-kube-api-access-d9q68\") on node \"crc\" DevicePath \"\""
Jan 26 00:33:49 crc kubenswrapper[5110]: I0126 00:33:49.761208 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mw7kl_must-gather-hssx4_40af9024-688a-433b-9bd3-1052721377d5/copy/0.log"
Jan 26 00:33:49 crc kubenswrapper[5110]: I0126 00:33:49.762980 5110 generic.go:358] "Generic (PLEG): container finished" podID="40af9024-688a-433b-9bd3-1052721377d5" containerID="78f410892fd489f80c13049f46e722583e2daa64c316cb6e3d39d8eba6f85151" exitCode=143
Jan 26 00:33:49 crc kubenswrapper[5110]: I0126 00:33:49.763105 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mw7kl/must-gather-hssx4"
Jan 26 00:33:49 crc kubenswrapper[5110]: I0126 00:33:49.763114 5110 scope.go:117] "RemoveContainer" containerID="78f410892fd489f80c13049f46e722583e2daa64c316cb6e3d39d8eba6f85151"
Jan 26 00:33:49 crc kubenswrapper[5110]: I0126 00:33:49.849153 5110 scope.go:117] "RemoveContainer" containerID="f236589ecf34a68e38149876a23afa21c43b2d1fe7615c2f54c6a74743530b6e"
Jan 26 00:33:49 crc kubenswrapper[5110]: I0126 00:33:49.942722 5110 scope.go:117] "RemoveContainer" containerID="78f410892fd489f80c13049f46e722583e2daa64c316cb6e3d39d8eba6f85151"
Jan 26 00:33:49 crc kubenswrapper[5110]: E0126 00:33:49.943166 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78f410892fd489f80c13049f46e722583e2daa64c316cb6e3d39d8eba6f85151\": container with ID starting with 78f410892fd489f80c13049f46e722583e2daa64c316cb6e3d39d8eba6f85151 not found: ID does not exist" containerID="78f410892fd489f80c13049f46e722583e2daa64c316cb6e3d39d8eba6f85151"
Jan 26 00:33:49 crc kubenswrapper[5110]: I0126 00:33:49.943214 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78f410892fd489f80c13049f46e722583e2daa64c316cb6e3d39d8eba6f85151"} err="failed to get container status \"78f410892fd489f80c13049f46e722583e2daa64c316cb6e3d39d8eba6f85151\": rpc error: code = NotFound desc = could not find container \"78f410892fd489f80c13049f46e722583e2daa64c316cb6e3d39d8eba6f85151\": container with ID starting with 78f410892fd489f80c13049f46e722583e2daa64c316cb6e3d39d8eba6f85151 not found: ID does not exist"
Jan 26 00:33:49 crc kubenswrapper[5110]: I0126 00:33:49.943241 5110 scope.go:117] "RemoveContainer" containerID="f236589ecf34a68e38149876a23afa21c43b2d1fe7615c2f54c6a74743530b6e"
Jan 26 00:33:49 crc kubenswrapper[5110]: E0126 00:33:49.944027 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f236589ecf34a68e38149876a23afa21c43b2d1fe7615c2f54c6a74743530b6e\": container with ID starting with f236589ecf34a68e38149876a23afa21c43b2d1fe7615c2f54c6a74743530b6e not found: ID does not exist" containerID="f236589ecf34a68e38149876a23afa21c43b2d1fe7615c2f54c6a74743530b6e"
Jan 26 00:33:49 crc kubenswrapper[5110]: I0126 00:33:49.944163 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f236589ecf34a68e38149876a23afa21c43b2d1fe7615c2f54c6a74743530b6e"} err="failed to get container status \"f236589ecf34a68e38149876a23afa21c43b2d1fe7615c2f54c6a74743530b6e\": rpc error: code = NotFound desc = could not find container \"f236589ecf34a68e38149876a23afa21c43b2d1fe7615c2f54c6a74743530b6e\": container with ID starting with f236589ecf34a68e38149876a23afa21c43b2d1fe7615c2f54c6a74743530b6e not found: ID does not exist"
Jan 26 00:33:56 crc kubenswrapper[5110]: I0126 00:33:56.813076 5110 patch_prober.go:28] interesting pod/machine-config-daemon-c6tpr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 00:33:56 crc kubenswrapper[5110]: I0126 00:33:56.813932 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.049370 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8p2lq"]
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.050313 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="76168dac-acdb-4e23-b0b3-22e061b8f730" containerName="extract-utilities"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.050331 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="76168dac-acdb-4e23-b0b3-22e061b8f730" containerName="extract-utilities"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.050371 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="76168dac-acdb-4e23-b0b3-22e061b8f730" containerName="extract-content"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.050377 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="76168dac-acdb-4e23-b0b3-22e061b8f730" containerName="extract-content"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.050394 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="76168dac-acdb-4e23-b0b3-22e061b8f730" containerName="registry-server"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.050400 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="76168dac-acdb-4e23-b0b3-22e061b8f730" containerName="registry-server"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.050416 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="40af9024-688a-433b-9bd3-1052721377d5" containerName="gather"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.050425 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="40af9024-688a-433b-9bd3-1052721377d5" containerName="gather"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.050432 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="40af9024-688a-433b-9bd3-1052721377d5" containerName="copy"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.050438 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="40af9024-688a-433b-9bd3-1052721377d5" containerName="copy"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.050566 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="40af9024-688a-433b-9bd3-1052721377d5" containerName="copy"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.050587 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="40af9024-688a-433b-9bd3-1052721377d5" containerName="gather"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.050603 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="76168dac-acdb-4e23-b0b3-22e061b8f730" containerName="registry-server"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.061820 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8p2lq"]
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.062003 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8p2lq"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.212009 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfac5a93-6b85-4bd5-bd4c-c07f116d1e31-catalog-content\") pod \"redhat-operators-8p2lq\" (UID: \"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31\") " pod="openshift-marketplace/redhat-operators-8p2lq"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.212505 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76r4r\" (UniqueName: \"kubernetes.io/projected/bfac5a93-6b85-4bd5-bd4c-c07f116d1e31-kube-api-access-76r4r\") pod \"redhat-operators-8p2lq\" (UID: \"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31\") " pod="openshift-marketplace/redhat-operators-8p2lq"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.212533 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfac5a93-6b85-4bd5-bd4c-c07f116d1e31-utilities\") pod \"redhat-operators-8p2lq\" (UID: \"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31\") " pod="openshift-marketplace/redhat-operators-8p2lq"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.314125 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfac5a93-6b85-4bd5-bd4c-c07f116d1e31-catalog-content\") pod \"redhat-operators-8p2lq\" (UID: \"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31\") " pod="openshift-marketplace/redhat-operators-8p2lq"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.314184 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-76r4r\" (UniqueName: \"kubernetes.io/projected/bfac5a93-6b85-4bd5-bd4c-c07f116d1e31-kube-api-access-76r4r\") pod \"redhat-operators-8p2lq\" (UID: \"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31\") " pod="openshift-marketplace/redhat-operators-8p2lq"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.314220 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfac5a93-6b85-4bd5-bd4c-c07f116d1e31-utilities\") pod \"redhat-operators-8p2lq\" (UID: \"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31\") " pod="openshift-marketplace/redhat-operators-8p2lq"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.314697 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfac5a93-6b85-4bd5-bd4c-c07f116d1e31-catalog-content\") pod \"redhat-operators-8p2lq\" (UID: \"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31\") " pod="openshift-marketplace/redhat-operators-8p2lq"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.314766 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfac5a93-6b85-4bd5-bd4c-c07f116d1e31-utilities\") pod \"redhat-operators-8p2lq\" (UID: \"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31\") " pod="openshift-marketplace/redhat-operators-8p2lq"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.339626 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-76r4r\" (UniqueName: \"kubernetes.io/projected/bfac5a93-6b85-4bd5-bd4c-c07f116d1e31-kube-api-access-76r4r\") pod \"redhat-operators-8p2lq\" (UID: \"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31\") " pod="openshift-marketplace/redhat-operators-8p2lq"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.410063 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8p2lq"
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.660525 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8p2lq"]
Jan 26 00:33:58 crc kubenswrapper[5110]: I0126 00:33:58.845197 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8p2lq" event={"ID":"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31","Type":"ContainerStarted","Data":"cee4d5be758629068e1cad37612d68c60dd85dc14c3390ae926fac12e6b711d6"}
Jan 26 00:33:59 crc kubenswrapper[5110]: I0126 00:33:59.856364 5110 generic.go:358] "Generic (PLEG): container finished" podID="bfac5a93-6b85-4bd5-bd4c-c07f116d1e31" containerID="5103402269d9e25ea838063f391f6a878c12a29d8814b8d2f944abaa731ea2eb" exitCode=0
Jan 26 00:33:59 crc kubenswrapper[5110]: I0126 00:33:59.856476 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8p2lq" event={"ID":"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31","Type":"ContainerDied","Data":"5103402269d9e25ea838063f391f6a878c12a29d8814b8d2f944abaa731ea2eb"}
Jan 26 00:34:00 crc kubenswrapper[5110]: I0126 00:34:00.152136 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489794-dl9qt"]
Jan 26 00:34:00 crc kubenswrapper[5110]: I0126 00:34:00.158845 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489794-dl9qt"
Jan 26 00:34:00 crc kubenswrapper[5110]: I0126 00:34:00.162633 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 26 00:34:00 crc kubenswrapper[5110]: I0126 00:34:00.163044 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dfffg\""
Jan 26 00:34:00 crc kubenswrapper[5110]: I0126 00:34:00.163316 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 26 00:34:00 crc kubenswrapper[5110]: I0126 00:34:00.170408 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489794-dl9qt"]
Jan 26 00:34:00 crc kubenswrapper[5110]: I0126 00:34:00.203411 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn5zk\" (UniqueName: \"kubernetes.io/projected/732e1a89-d566-4f98-b0aa-793546368014-kube-api-access-mn5zk\") pod \"auto-csr-approver-29489794-dl9qt\" (UID: \"732e1a89-d566-4f98-b0aa-793546368014\") " pod="openshift-infra/auto-csr-approver-29489794-dl9qt"
Jan 26 00:34:00 crc kubenswrapper[5110]: I0126 00:34:00.305281 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mn5zk\" (UniqueName: \"kubernetes.io/projected/732e1a89-d566-4f98-b0aa-793546368014-kube-api-access-mn5zk\") pod \"auto-csr-approver-29489794-dl9qt\" (UID: \"732e1a89-d566-4f98-b0aa-793546368014\") " pod="openshift-infra/auto-csr-approver-29489794-dl9qt"
Jan 26 00:34:00 crc kubenswrapper[5110]: I0126 00:34:00.351266 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn5zk\" (UniqueName: \"kubernetes.io/projected/732e1a89-d566-4f98-b0aa-793546368014-kube-api-access-mn5zk\") pod \"auto-csr-approver-29489794-dl9qt\" (UID: \"732e1a89-d566-4f98-b0aa-793546368014\") " pod="openshift-infra/auto-csr-approver-29489794-dl9qt"
Jan 26 00:34:00 crc kubenswrapper[5110]: I0126 00:34:00.488869 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489794-dl9qt"
Jan 26 00:34:00 crc kubenswrapper[5110]: I0126 00:34:00.773022 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489794-dl9qt"]
Jan 26 00:34:00 crc kubenswrapper[5110]: W0126 00:34:00.789456 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod732e1a89_d566_4f98_b0aa_793546368014.slice/crio-8bb9ec2e3b2c02d31421ecf5de7a092d7bcedf93932c02e4d17306a4da4a07ab WatchSource:0}: Error finding container 8bb9ec2e3b2c02d31421ecf5de7a092d7bcedf93932c02e4d17306a4da4a07ab: Status 404 returned error can't find the container with id 8bb9ec2e3b2c02d31421ecf5de7a092d7bcedf93932c02e4d17306a4da4a07ab
Jan 26 00:34:00 crc kubenswrapper[5110]: I0126 00:34:00.866692 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8p2lq" event={"ID":"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31","Type":"ContainerStarted","Data":"395e84d34aad173f0d8743960c8433369b457f20e0b1ec195b75bb4bd1660f66"}
Jan 26 00:34:00 crc kubenswrapper[5110]: I0126 00:34:00.868385 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489794-dl9qt" event={"ID":"732e1a89-d566-4f98-b0aa-793546368014","Type":"ContainerStarted","Data":"8bb9ec2e3b2c02d31421ecf5de7a092d7bcedf93932c02e4d17306a4da4a07ab"}
Jan 26 00:34:01 crc kubenswrapper[5110]: I0126 00:34:01.881564 5110 generic.go:358] "Generic (PLEG): container finished" podID="bfac5a93-6b85-4bd5-bd4c-c07f116d1e31" containerID="395e84d34aad173f0d8743960c8433369b457f20e0b1ec195b75bb4bd1660f66" exitCode=0
Jan 26 00:34:01 crc kubenswrapper[5110]: I0126 00:34:01.881677 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8p2lq" event={"ID":"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31","Type":"ContainerDied","Data":"395e84d34aad173f0d8743960c8433369b457f20e0b1ec195b75bb4bd1660f66"}
Jan 26 00:34:02 crc kubenswrapper[5110]: I0126 00:34:02.903307 5110 generic.go:358] "Generic (PLEG): container finished" podID="732e1a89-d566-4f98-b0aa-793546368014" containerID="5243d58d0d83ddb287226baab42eb3ff4ae80470511bb533912029979010da7c" exitCode=0
Jan 26 00:34:02 crc kubenswrapper[5110]: I0126 00:34:02.903390 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489794-dl9qt" event={"ID":"732e1a89-d566-4f98-b0aa-793546368014","Type":"ContainerDied","Data":"5243d58d0d83ddb287226baab42eb3ff4ae80470511bb533912029979010da7c"}
Jan 26 00:34:02 crc kubenswrapper[5110]: I0126 00:34:02.907865 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8p2lq" event={"ID":"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31","Type":"ContainerStarted","Data":"62d28819e5ae1ea0669c728ec3dadfb18f328ca531c819b78da5d735c0a1f041"}
Jan 26 00:34:03 crc kubenswrapper[5110]: I0126 00:34:03.022676 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8p2lq" podStartSLOduration=4.251625844 podStartE2EDuration="5.022647397s" podCreationTimestamp="2026-01-26 00:33:58 +0000 UTC" firstStartedPulling="2026-01-26 00:33:59.858167788 +0000 UTC m=+1497.087066437" lastFinishedPulling="2026-01-26 00:34:00.629189371 +0000 UTC m=+1497.858087990" observedRunningTime="2026-01-26 00:34:03.020091435 +0000 UTC m=+1500.248990044" watchObservedRunningTime="2026-01-26 00:34:03.022647397 +0000 UTC m=+1500.251546006"
Jan 26 00:34:03 crc kubenswrapper[5110]: I0126 00:34:03.775528 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_61968fab-ac43-43ab-97ea-814704095718/docker-build/0.log"
Jan 26 00:34:03 crc kubenswrapper[5110]: I0126 00:34:03.778871 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf/docker-build/0.log"
Jan 26 00:34:03 crc kubenswrapper[5110]: I0126 00:34:03.781286 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_61968fab-ac43-43ab-97ea-814704095718/docker-build/0.log"
Jan 26 00:34:03 crc kubenswrapper[5110]: I0126 00:34:03.781294 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_f8838ea6-b825-4903-a386-edaf6e7372c6/docker-build/0.log"
Jan 26 00:34:03 crc kubenswrapper[5110]: I0126 00:34:03.784885 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_2c4ff42e-e8a1-4cc5-bc30-ce96d430c9cf/docker-build/0.log"
Jan 26 00:34:03 crc kubenswrapper[5110]: I0126 00:34:03.786362 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_0ec8732b-c9f0-4b4e-8774-673da5c59114/docker-build/0.log"
Jan 26 00:34:03 crc kubenswrapper[5110]: I0126 00:34:03.786665 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_f8838ea6-b825-4903-a386-edaf6e7372c6/docker-build/0.log"
Jan 26 00:34:03 crc kubenswrapper[5110]: I0126 00:34:03.790303 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_0ec8732b-c9f0-4b4e-8774-673da5c59114/docker-build/0.log"
Jan 26 00:34:03 crc kubenswrapper[5110]: I0126 00:34:03.855623 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jh4hk_f2948d2b-fac7-4f3f-8b5f-f6f9c914daec/kube-multus/0.log"
Jan 26 00:34:03 crc kubenswrapper[5110]: I0126 00:34:03.856273 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jh4hk_f2948d2b-fac7-4f3f-8b5f-f6f9c914daec/kube-multus/0.log"
Jan 26 00:34:03 crc kubenswrapper[5110]: I0126 00:34:03.863735 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 26 00:34:03 crc kubenswrapper[5110]: I0126 00:34:03.863854 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 26 00:34:04 crc kubenswrapper[5110]: I0126 00:34:04.222359 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489794-dl9qt"
Jan 26 00:34:04 crc kubenswrapper[5110]: I0126 00:34:04.317904 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn5zk\" (UniqueName: \"kubernetes.io/projected/732e1a89-d566-4f98-b0aa-793546368014-kube-api-access-mn5zk\") pod \"732e1a89-d566-4f98-b0aa-793546368014\" (UID: \"732e1a89-d566-4f98-b0aa-793546368014\") "
Jan 26 00:34:04 crc kubenswrapper[5110]: I0126 00:34:04.329859 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/732e1a89-d566-4f98-b0aa-793546368014-kube-api-access-mn5zk" (OuterVolumeSpecName: "kube-api-access-mn5zk") pod "732e1a89-d566-4f98-b0aa-793546368014" (UID: "732e1a89-d566-4f98-b0aa-793546368014"). InnerVolumeSpecName "kube-api-access-mn5zk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:34:04 crc kubenswrapper[5110]: I0126 00:34:04.420287 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mn5zk\" (UniqueName: \"kubernetes.io/projected/732e1a89-d566-4f98-b0aa-793546368014-kube-api-access-mn5zk\") on node \"crc\" DevicePath \"\""
Jan 26 00:34:04 crc kubenswrapper[5110]: I0126 00:34:04.931685 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489794-dl9qt" event={"ID":"732e1a89-d566-4f98-b0aa-793546368014","Type":"ContainerDied","Data":"8bb9ec2e3b2c02d31421ecf5de7a092d7bcedf93932c02e4d17306a4da4a07ab"}
Jan 26 00:34:04 crc kubenswrapper[5110]: I0126 00:34:04.932265 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bb9ec2e3b2c02d31421ecf5de7a092d7bcedf93932c02e4d17306a4da4a07ab"
Jan 26 00:34:04 crc kubenswrapper[5110]: I0126 00:34:04.932463 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489794-dl9qt"
Jan 26 00:34:05 crc kubenswrapper[5110]: I0126 00:34:05.288341 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489788-hq95m"]
Jan 26 00:34:05 crc kubenswrapper[5110]: I0126 00:34:05.296506 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489788-hq95m"]
Jan 26 00:34:05 crc kubenswrapper[5110]: I0126 00:34:05.327872 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e125fde3-1b33-4b6d-92ee-bdeada9c1202" path="/var/lib/kubelet/pods/e125fde3-1b33-4b6d-92ee-bdeada9c1202/volumes"
Jan 26 00:34:08 crc kubenswrapper[5110]: I0126 00:34:08.410864 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8p2lq"
Jan 26 00:34:08 crc kubenswrapper[5110]: I0126 00:34:08.411417 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-8p2lq"
Jan 26 00:34:08 crc kubenswrapper[5110]: I0126 00:34:08.459108 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8p2lq"
Jan 26 00:34:09 crc kubenswrapper[5110]: I0126 00:34:09.019619 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8p2lq"
Jan 26 00:34:09 crc kubenswrapper[5110]: I0126 00:34:09.097141 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8p2lq"]
Jan 26 00:34:10 crc kubenswrapper[5110]: I0126 00:34:10.983948 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8p2lq" podUID="bfac5a93-6b85-4bd5-bd4c-c07f116d1e31" containerName="registry-server" containerID="cri-o://62d28819e5ae1ea0669c728ec3dadfb18f328ca531c819b78da5d735c0a1f041" gracePeriod=2
Jan 26 00:34:11 crc kubenswrapper[5110]: I0126 00:34:11.974316 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8p2lq"
Jan 26 00:34:11 crc kubenswrapper[5110]: I0126 00:34:11.996084 5110 generic.go:358] "Generic (PLEG): container finished" podID="bfac5a93-6b85-4bd5-bd4c-c07f116d1e31" containerID="62d28819e5ae1ea0669c728ec3dadfb18f328ca531c819b78da5d735c0a1f041" exitCode=0
Jan 26 00:34:11 crc kubenswrapper[5110]: I0126 00:34:11.996270 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8p2lq" event={"ID":"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31","Type":"ContainerDied","Data":"62d28819e5ae1ea0669c728ec3dadfb18f328ca531c819b78da5d735c0a1f041"}
Jan 26 00:34:11 crc kubenswrapper[5110]: I0126 00:34:11.996306 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8p2lq"
Jan 26 00:34:11 crc kubenswrapper[5110]: I0126 00:34:11.996356 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8p2lq" event={"ID":"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31","Type":"ContainerDied","Data":"cee4d5be758629068e1cad37612d68c60dd85dc14c3390ae926fac12e6b711d6"}
Jan 26 00:34:11 crc kubenswrapper[5110]: I0126 00:34:11.996393 5110 scope.go:117] "RemoveContainer" containerID="62d28819e5ae1ea0669c728ec3dadfb18f328ca531c819b78da5d735c0a1f041"
Jan 26 00:34:12 crc kubenswrapper[5110]: I0126 00:34:12.031021 5110 scope.go:117] "RemoveContainer" containerID="395e84d34aad173f0d8743960c8433369b457f20e0b1ec195b75bb4bd1660f66"
Jan 26 00:34:12 crc kubenswrapper[5110]: I0126 00:34:12.039611 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfac5a93-6b85-4bd5-bd4c-c07f116d1e31-utilities\") pod \"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31\" (UID: \"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31\") "
Jan 26 00:34:12 crc kubenswrapper[5110]: I0126 00:34:12.039714 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfac5a93-6b85-4bd5-bd4c-c07f116d1e31-catalog-content\") pod \"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31\" (UID: \"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31\") "
Jan 26 00:34:12 crc kubenswrapper[5110]: I0126 00:34:12.039912 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76r4r\" (UniqueName: \"kubernetes.io/projected/bfac5a93-6b85-4bd5-bd4c-c07f116d1e31-kube-api-access-76r4r\") pod \"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31\" (UID: \"bfac5a93-6b85-4bd5-bd4c-c07f116d1e31\") "
Jan 26 00:34:12 crc kubenswrapper[5110]: I0126 00:34:12.043087 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfac5a93-6b85-4bd5-bd4c-c07f116d1e31-utilities" (OuterVolumeSpecName: "utilities") pod "bfac5a93-6b85-4bd5-bd4c-c07f116d1e31" (UID: "bfac5a93-6b85-4bd5-bd4c-c07f116d1e31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:34:12 crc kubenswrapper[5110]: I0126 00:34:12.052116 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfac5a93-6b85-4bd5-bd4c-c07f116d1e31-kube-api-access-76r4r" (OuterVolumeSpecName: "kube-api-access-76r4r") pod "bfac5a93-6b85-4bd5-bd4c-c07f116d1e31" (UID: "bfac5a93-6b85-4bd5-bd4c-c07f116d1e31"). InnerVolumeSpecName "kube-api-access-76r4r". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:34:12 crc kubenswrapper[5110]: I0126 00:34:12.080273 5110 scope.go:117] "RemoveContainer" containerID="5103402269d9e25ea838063f391f6a878c12a29d8814b8d2f944abaa731ea2eb"
Jan 26 00:34:12 crc kubenswrapper[5110]: I0126 00:34:12.111643 5110 scope.go:117] "RemoveContainer" containerID="62d28819e5ae1ea0669c728ec3dadfb18f328ca531c819b78da5d735c0a1f041"
Jan 26 00:34:12 crc kubenswrapper[5110]: E0126 00:34:12.112868 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62d28819e5ae1ea0669c728ec3dadfb18f328ca531c819b78da5d735c0a1f041\": container with ID starting with 62d28819e5ae1ea0669c728ec3dadfb18f328ca531c819b78da5d735c0a1f041 not found: ID does not exist" containerID="62d28819e5ae1ea0669c728ec3dadfb18f328ca531c819b78da5d735c0a1f041"
Jan 26 00:34:12 crc kubenswrapper[5110]: I0126 00:34:12.112919 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62d28819e5ae1ea0669c728ec3dadfb18f328ca531c819b78da5d735c0a1f041"} err="failed to get container status \"62d28819e5ae1ea0669c728ec3dadfb18f328ca531c819b78da5d735c0a1f041\": rpc error: code = NotFound desc = could not find container \"62d28819e5ae1ea0669c728ec3dadfb18f328ca531c819b78da5d735c0a1f041\": container with ID starting with 62d28819e5ae1ea0669c728ec3dadfb18f328ca531c819b78da5d735c0a1f041 not found: ID does not exist"
Jan 26 00:34:12 crc kubenswrapper[5110]: I0126 00:34:12.112958 5110 scope.go:117] "RemoveContainer" containerID="395e84d34aad173f0d8743960c8433369b457f20e0b1ec195b75bb4bd1660f66"
Jan 26 00:34:12 crc kubenswrapper[5110]: E0126 00:34:12.113789 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"395e84d34aad173f0d8743960c8433369b457f20e0b1ec195b75bb4bd1660f66\": container with ID starting with 395e84d34aad173f0d8743960c8433369b457f20e0b1ec195b75bb4bd1660f66 not found: ID does not exist" containerID="395e84d34aad173f0d8743960c8433369b457f20e0b1ec195b75bb4bd1660f66"
Jan 26 00:34:12 crc kubenswrapper[5110]: I0126 00:34:12.113844 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"395e84d34aad173f0d8743960c8433369b457f20e0b1ec195b75bb4bd1660f66"} err="failed to get container status \"395e84d34aad173f0d8743960c8433369b457f20e0b1ec195b75bb4bd1660f66\": rpc error: code = NotFound desc = could not find container \"395e84d34aad173f0d8743960c8433369b457f20e0b1ec195b75bb4bd1660f66\": container with ID starting with 395e84d34aad173f0d8743960c8433369b457f20e0b1ec195b75bb4bd1660f66 not found: ID does not exist"
Jan 26 00:34:12 crc kubenswrapper[5110]: I0126 00:34:12.113863 5110 scope.go:117] "RemoveContainer" containerID="5103402269d9e25ea838063f391f6a878c12a29d8814b8d2f944abaa731ea2eb"
Jan 26 00:34:12 crc kubenswrapper[5110]: E0126 00:34:12.114350 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5103402269d9e25ea838063f391f6a878c12a29d8814b8d2f944abaa731ea2eb\": container with ID starting with 5103402269d9e25ea838063f391f6a878c12a29d8814b8d2f944abaa731ea2eb not found: ID does not exist" containerID="5103402269d9e25ea838063f391f6a878c12a29d8814b8d2f944abaa731ea2eb"
Jan 26 00:34:12 crc kubenswrapper[5110]: I0126 00:34:12.114427 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5103402269d9e25ea838063f391f6a878c12a29d8814b8d2f944abaa731ea2eb"} err="failed to get container status \"5103402269d9e25ea838063f391f6a878c12a29d8814b8d2f944abaa731ea2eb\": rpc error: code = NotFound desc = could not find container \"5103402269d9e25ea838063f391f6a878c12a29d8814b8d2f944abaa731ea2eb\": container with ID starting with 5103402269d9e25ea838063f391f6a878c12a29d8814b8d2f944abaa731ea2eb not found: ID does not exist"
Jan 26 00:34:12 crc kubenswrapper[5110]: I0126 00:34:12.142033 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfac5a93-6b85-4bd5-bd4c-c07f116d1e31-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 00:34:12 crc kubenswrapper[5110]: I0126 00:34:12.142074 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-76r4r\" (UniqueName: \"kubernetes.io/projected/bfac5a93-6b85-4bd5-bd4c-c07f116d1e31-kube-api-access-76r4r\") on node \"crc\" DevicePath \"\""
Jan 26 00:34:12 crc kubenswrapper[5110]: I0126 00:34:12.342374 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfac5a93-6b85-4bd5-bd4c-c07f116d1e31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfac5a93-6b85-4bd5-bd4c-c07f116d1e31" (UID: "bfac5a93-6b85-4bd5-bd4c-c07f116d1e31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:34:12 crc kubenswrapper[5110]: I0126 00:34:12.345658 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfac5a93-6b85-4bd5-bd4c-c07f116d1e31-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 00:34:12 crc kubenswrapper[5110]: I0126 00:34:12.641828 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8p2lq"]
Jan 26 00:34:12 crc kubenswrapper[5110]: I0126 00:34:12.648840 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8p2lq"]
Jan 26 00:34:13 crc kubenswrapper[5110]: I0126 00:34:13.327550 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfac5a93-6b85-4bd5-bd4c-c07f116d1e31" path="/var/lib/kubelet/pods/bfac5a93-6b85-4bd5-bd4c-c07f116d1e31/volumes"
Jan 26 00:34:17 crc kubenswrapper[5110]: I0126 00:34:17.268417 5110 scope.go:117] "RemoveContainer" containerID="4d3cad43e3f6d13e2df60b786c8ff4ecbfe824f40f5c735710b47de9176f153c"
Jan 26 00:34:26 crc kubenswrapper[5110]: I0126 00:34:26.812653 5110 patch_prober.go:28] interesting pod/machine-config-daemon-c6tpr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 00:34:26 crc kubenswrapper[5110]: I0126 00:34:26.813305 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 00:34:56 crc kubenswrapper[5110]: I0126 00:34:56.813157 5110 patch_prober.go:28] interesting pod/machine-config-daemon-c6tpr 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:34:56 crc kubenswrapper[5110]: I0126 00:34:56.813923 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:34:56 crc kubenswrapper[5110]: I0126 00:34:56.814010 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" Jan 26 00:34:56 crc kubenswrapper[5110]: I0126 00:34:56.814955 5110 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"538b080a7f2fe02cd7dffd8adbbbd4bde5573141b47811cea8a3b724e939206d"} pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:34:56 crc kubenswrapper[5110]: I0126 00:34:56.815026 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" podUID="f15bed73-d669-439f-9828-7b952d9bfe65" containerName="machine-config-daemon" containerID="cri-o://538b080a7f2fe02cd7dffd8adbbbd4bde5573141b47811cea8a3b724e939206d" gracePeriod=600 Jan 26 00:34:57 crc kubenswrapper[5110]: I0126 00:34:57.713421 5110 generic.go:358] "Generic (PLEG): container finished" podID="f15bed73-d669-439f-9828-7b952d9bfe65" containerID="538b080a7f2fe02cd7dffd8adbbbd4bde5573141b47811cea8a3b724e939206d" exitCode=0 Jan 26 00:34:57 crc kubenswrapper[5110]: I0126 00:34:57.713608 5110 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" event={"ID":"f15bed73-d669-439f-9828-7b952d9bfe65","Type":"ContainerDied","Data":"538b080a7f2fe02cd7dffd8adbbbd4bde5573141b47811cea8a3b724e939206d"} Jan 26 00:34:57 crc kubenswrapper[5110]: I0126 00:34:57.714410 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6tpr" event={"ID":"f15bed73-d669-439f-9828-7b952d9bfe65","Type":"ContainerStarted","Data":"49d5ed1c83d0e3e8206209c7feac7beae303d2e5216b55f7f5c39ea03955e6b3"} Jan 26 00:34:57 crc kubenswrapper[5110]: I0126 00:34:57.714442 5110 scope.go:117] "RemoveContainer" containerID="6378941edf2a5c6b68ade18a39cf48e1e5d7713758a77cf9b8edc9cdb2b220a2" Jan 26 00:36:00 crc kubenswrapper[5110]: I0126 00:36:00.165568 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489796-bghkv"] Jan 26 00:36:00 crc kubenswrapper[5110]: I0126 00:36:00.167141 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bfac5a93-6b85-4bd5-bd4c-c07f116d1e31" containerName="extract-utilities" Jan 26 00:36:00 crc kubenswrapper[5110]: I0126 00:36:00.167161 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfac5a93-6b85-4bd5-bd4c-c07f116d1e31" containerName="extract-utilities" Jan 26 00:36:00 crc kubenswrapper[5110]: I0126 00:36:00.167189 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bfac5a93-6b85-4bd5-bd4c-c07f116d1e31" containerName="registry-server" Jan 26 00:36:00 crc kubenswrapper[5110]: I0126 00:36:00.167197 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfac5a93-6b85-4bd5-bd4c-c07f116d1e31" containerName="registry-server" Jan 26 00:36:00 crc kubenswrapper[5110]: I0126 00:36:00.167211 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bfac5a93-6b85-4bd5-bd4c-c07f116d1e31" containerName="extract-content" Jan 26 00:36:00 crc 
kubenswrapper[5110]: I0126 00:36:00.167219 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfac5a93-6b85-4bd5-bd4c-c07f116d1e31" containerName="extract-content" Jan 26 00:36:00 crc kubenswrapper[5110]: I0126 00:36:00.167477 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="732e1a89-d566-4f98-b0aa-793546368014" containerName="oc" Jan 26 00:36:00 crc kubenswrapper[5110]: I0126 00:36:00.167539 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="732e1a89-d566-4f98-b0aa-793546368014" containerName="oc" Jan 26 00:36:00 crc kubenswrapper[5110]: I0126 00:36:00.168044 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="bfac5a93-6b85-4bd5-bd4c-c07f116d1e31" containerName="registry-server" Jan 26 00:36:00 crc kubenswrapper[5110]: I0126 00:36:00.168082 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="732e1a89-d566-4f98-b0aa-793546368014" containerName="oc" Jan 26 00:36:00 crc kubenswrapper[5110]: I0126 00:36:00.182112 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489796-bghkv"] Jan 26 00:36:00 crc kubenswrapper[5110]: I0126 00:36:00.182386 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489796-bghkv" Jan 26 00:36:00 crc kubenswrapper[5110]: I0126 00:36:00.188317 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:36:00 crc kubenswrapper[5110]: I0126 00:36:00.188470 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:36:00 crc kubenswrapper[5110]: I0126 00:36:00.190357 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dfffg\"" Jan 26 00:36:00 crc kubenswrapper[5110]: I0126 00:36:00.326754 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzwm6\" (UniqueName: \"kubernetes.io/projected/a7ffe01b-1cb9-40b6-bc1a-0d3d55c1274a-kube-api-access-tzwm6\") pod \"auto-csr-approver-29489796-bghkv\" (UID: \"a7ffe01b-1cb9-40b6-bc1a-0d3d55c1274a\") " pod="openshift-infra/auto-csr-approver-29489796-bghkv" Jan 26 00:36:00 crc kubenswrapper[5110]: I0126 00:36:00.429476 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tzwm6\" (UniqueName: \"kubernetes.io/projected/a7ffe01b-1cb9-40b6-bc1a-0d3d55c1274a-kube-api-access-tzwm6\") pod \"auto-csr-approver-29489796-bghkv\" (UID: \"a7ffe01b-1cb9-40b6-bc1a-0d3d55c1274a\") " pod="openshift-infra/auto-csr-approver-29489796-bghkv" Jan 26 00:36:00 crc kubenswrapper[5110]: I0126 00:36:00.456409 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzwm6\" (UniqueName: \"kubernetes.io/projected/a7ffe01b-1cb9-40b6-bc1a-0d3d55c1274a-kube-api-access-tzwm6\") pod \"auto-csr-approver-29489796-bghkv\" (UID: \"a7ffe01b-1cb9-40b6-bc1a-0d3d55c1274a\") " pod="openshift-infra/auto-csr-approver-29489796-bghkv" Jan 26 00:36:00 crc kubenswrapper[5110]: I0126 00:36:00.544570 5110 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489796-bghkv" Jan 26 00:36:01 crc kubenswrapper[5110]: I0126 00:36:01.036360 5110 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:36:01 crc kubenswrapper[5110]: I0126 00:36:01.036588 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489796-bghkv"] Jan 26 00:36:01 crc kubenswrapper[5110]: I0126 00:36:01.420100 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489796-bghkv" event={"ID":"a7ffe01b-1cb9-40b6-bc1a-0d3d55c1274a","Type":"ContainerStarted","Data":"f378b0b1cb93a45f42f2e219ad58bf64c64b2dacdaa31e12c139a8063827968b"} Jan 26 00:36:03 crc kubenswrapper[5110]: I0126 00:36:03.489220 5110 generic.go:358] "Generic (PLEG): container finished" podID="a7ffe01b-1cb9-40b6-bc1a-0d3d55c1274a" containerID="5237c05e34952992064947b0ccaeb3aeae7b09b69a8c2a19b1f476f4ccf0b414" exitCode=0 Jan 26 00:36:03 crc kubenswrapper[5110]: I0126 00:36:03.503397 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489796-bghkv" event={"ID":"a7ffe01b-1cb9-40b6-bc1a-0d3d55c1274a","Type":"ContainerDied","Data":"5237c05e34952992064947b0ccaeb3aeae7b09b69a8c2a19b1f476f4ccf0b414"} Jan 26 00:36:04 crc kubenswrapper[5110]: I0126 00:36:04.808975 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489796-bghkv" Jan 26 00:36:04 crc kubenswrapper[5110]: I0126 00:36:04.890296 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzwm6\" (UniqueName: \"kubernetes.io/projected/a7ffe01b-1cb9-40b6-bc1a-0d3d55c1274a-kube-api-access-tzwm6\") pod \"a7ffe01b-1cb9-40b6-bc1a-0d3d55c1274a\" (UID: \"a7ffe01b-1cb9-40b6-bc1a-0d3d55c1274a\") " Jan 26 00:36:04 crc kubenswrapper[5110]: I0126 00:36:04.908090 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7ffe01b-1cb9-40b6-bc1a-0d3d55c1274a-kube-api-access-tzwm6" (OuterVolumeSpecName: "kube-api-access-tzwm6") pod "a7ffe01b-1cb9-40b6-bc1a-0d3d55c1274a" (UID: "a7ffe01b-1cb9-40b6-bc1a-0d3d55c1274a"). InnerVolumeSpecName "kube-api-access-tzwm6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:36:04 crc kubenswrapper[5110]: I0126 00:36:04.992787 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tzwm6\" (UniqueName: \"kubernetes.io/projected/a7ffe01b-1cb9-40b6-bc1a-0d3d55c1274a-kube-api-access-tzwm6\") on node \"crc\" DevicePath \"\"" Jan 26 00:36:05 crc kubenswrapper[5110]: I0126 00:36:05.518136 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489796-bghkv" Jan 26 00:36:05 crc kubenswrapper[5110]: I0126 00:36:05.518154 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489796-bghkv" event={"ID":"a7ffe01b-1cb9-40b6-bc1a-0d3d55c1274a","Type":"ContainerDied","Data":"f378b0b1cb93a45f42f2e219ad58bf64c64b2dacdaa31e12c139a8063827968b"} Jan 26 00:36:05 crc kubenswrapper[5110]: I0126 00:36:05.518700 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f378b0b1cb93a45f42f2e219ad58bf64c64b2dacdaa31e12c139a8063827968b" Jan 26 00:36:05 crc kubenswrapper[5110]: I0126 00:36:05.888668 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489790-7mjlr"] Jan 26 00:36:05 crc kubenswrapper[5110]: I0126 00:36:05.893164 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489790-7mjlr"] Jan 26 00:36:07 crc kubenswrapper[5110]: I0126 00:36:07.330584 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c184d7fd-c527-4955-920d-d058efb87466" path="/var/lib/kubelet/pods/c184d7fd-c527-4955-920d-d058efb87466/volumes" Jan 26 00:36:17 crc kubenswrapper[5110]: I0126 00:36:17.592020 5110 scope.go:117] "RemoveContainer" containerID="a5c47539112c424d698b49041ce316d04047ffeaa1aeef92aa0bb5cabbea8e1b"